Don’t Romanticize the Present

Steven Pinker and Jason Hickel have recently engaged in a back-and-forth about whether or not global poverty is decreasing. The first salvo was an essay by Hickel in the Guardian targeting claims made by Bill Gates. Pinker responded here, and Hickel posted his rejoinder at his site.

I’ll let you dive into the debate if you’re so inclined. The exchange is of interest to me, in part, because evaluations of modern technology are often intertwined with this larger debate about the relative merits of what, for brevity’s sake, we may simply call modernity (although, of course, it’s complicated).

I’m especially interested in a rhetorical move that is often employed in these kinds of debates:  it amounts to the charge of romanticizing the past.

So, for example, Pinker claims, “Hickel’s picture of the past is a romantic fairy tale, devoid of citations or evidence.” I’ll note in passing Hickel’s response, summed up in this line: “All of this violence, and much more, gets elided in your narrative and repackaged as a happy story of progress. And you say I’m the one possessed of romantic fairy tales.” Hickel, in my view, gets the better of Pinker on this point.

In any case, the trope is recurring and, as I see it, tiresome. I wrote about it quite early in the life of this blog when I explained that I did not, in fact, wish to be a medieval peasant.

More recently, Matt Stoller tweeted, “When I criticize big tech monopolies the bad faith response is often a variant of ‘so you want to go back to horses and buggies?!?'” Stoller encountered some variant of this line so often that he was searching for a simple term by which to refer to it. It’s a Borg Complex symptom, as far as I’m concerned.

At a forum about technology and human flourishing I recently attended, the moderator, a fine scholar whose work I admire, explicitly cautioned us in his opening statements against romanticizing the past.

It would take no time at all to find similar examples, especially if you expand “romanticizing the past” to include the equally common charge of reactionary nostalgia. Both betray a palpable anxiousness about upholding the superiority of the present.

I understand the impulse, I really do. I think it was from Alan Jacobs that I first learned about the poet W. H. Auden’s distinction between those whose tendency is to look longingly back at some better age in the past and those who look hopefully toward some ideal future:  Arcadians and Utopians respectively, he called them. Auden took these to be matters of temperament. If so, then I would readily admit to being temperamentally Arcadian. For that reason, I think I well understand the temptation and try to be on guard against it.

That said, stern warnings against romanticizing the past sometimes reveal a susceptibility to another temptation:  romanticizing the present.

This is not altogether surprising. To be modern is to define oneself by one’s location in time, specifically by being on the leading edge of time. Novelty becomes a raison d’être.

As the historian Michael Gillespie has put it,

… to think of oneself as modern is to define one’s being in terms of time. This is remarkable. In previous ages and other places, people have defined themselves in terms of their land or place, their race or ethnic group, their traditions or their gods, but not explicitly in terms of time …  To be modern means to be “new,” to be an unprecedented event in the flow of time, a first beginning, something different than anything that has come before, a novel way of being in the world, ultimately not even a form of being but a form of becoming.

Within this cultural logic, the possibility that something, anything, was better in the past is not only a matter of error; it may be experienced as a threat to one’s moral compass and identity. Over time, perhaps principally through the nineteenth century, progress displaced providence and, consequently, optimism displaced hope. The older theological categories were simply secularized. Capital-P Progress, then, despite its many critics, still does a lot of work within our intellectual and moral frameworks.

Whatever its sources, the knee-jerk charge of romanticizing the past or of succumbing to reactionary nostalgia often amounts to a refusal to think about technology or take responsibility for it.

As the late Paul Virilio once put it, “I believe that you must appreciate technology just like art. You wouldn’t tell an art connoisseur that he can’t prefer abstractionism to expressionism. To love is to choose. And today, we’re losing this. Love has become an obligation.”

We are not obligated to love technology. This is so not only because love, in this instance, ought not to be an obligation but also because there is no such thing as technology. By this I mean simply that technology is a category of dubious utility. If we allow it to stand as an umbrella term for everything from modern dentistry to the apparatus of ubiquitous surveillance, then we are forced to either accept modern technology in toto or reject it in toto. We are thus discouraged from thoughtful discrimination and responsible judgment. It is within this frame that the charge of romanticizing the past as a rejoinder to any criticism of technology operates. And it is this frame that we must reject. Modern technology is not good by virtue of its being modern. Past configurations of the techno-social milieu are not bad by virtue of their being past.

We should romanticize neither the past nor the present, nor the future for that matter. We should think critically about how we develop, adopt, and implement technology, so far as it is in our power to do so. Such thinking stands only to benefit from an engagement with the past as, if nothing else, a point of reference. The point, however, is not a retrieval of the past but a better ordering of the present and future.

 

Listening To Those At The Hinges

A passing thought this evening: we should be attentive to the experience and testimony of those whose lives turn out to be the hinges on which one era closes and another opens.

There are several ways to parse that, of course, as many as there are ways of understanding what amounts to a new era. We might, for example, speak of it from any number of perspectives:  a new political era, a new economic era, a new artistic era, etc.

I’m thinking chiefly of new technological eras, of course.

And obviously, I’m thinking of significant transitions, those whose ramifications spill out into various domains of our personal and social lives. One might think of the transitions marked by the advent of printing, electric power grids, or the automobile.

In cases drawn from the more distant past—printing, for instance, or even the development of writing—it may be harder to pinpoint a hinge generation because the changes played out at a relatively slow pace. The closer we get to our own time, though, the more rapidly transitions seem to unfold:  within a lifetime rather than across lifetimes.

The most obvious case in point, one that many of us are able to speak of from personal experience, is the transition from the world before the commercial internet to the world after. I’m among those old enough to have a living memory of the world before the internet; AOL came to my home as I approached twenty years of age. Perhaps you are as well, or perhaps you have no memory of a world in which the internet was not a pervasive fact of life.

I suspect the development of the smartphone is also similarly consequential. There are more of us, of course, who remember the world before smartphones became more or less ubiquitous in the developed world, but already there are those entering adulthood for whom that is not the case.

I was reminded of a couple of paragraphs from Michael Heim’s 1984 book on the meaning of word processing technology. Responding to those who might wonder whether it was too soon to take stock of a then-nascent technology, Heim writes,

“Yet it is precisely this point in time that causes us to become philosophical. For it is at the moment of such transitions that the past becomes clear as a past, as obsolescent, and the future becomes clear as destiny, a challenge of the unknown. A philosophical study of digital writing made five or ten years from now would be better than one written now in the sense of being more comprehensive, more fully certain in its grasp of the new writing. At the same time, however, the felt contrast with the older writing technology would have become faded by the gradually increasing distance from typewritten and mechanical writing. Like our involvement with the automobile, that with processing texts will grow in transparency—until it becomes a condition of our daily life, taken for granted.

But what is granted to us in each epoch was at one time a beginning, a start, a change that was startling. Though the conditions of daily living do become transparent, they still draw upon our energies and upon the time of our lives; they soon become necessary conditions and come to structure our lives. It is incumbent on us then to grow philosophical while we can still be startled, for philosophy, if Aristotle can be trusted, begins in wonder, and, as Heraclitus suggests, ‘One should not act or speak as if asleep.’”

I’m thinking about this just now after taking a look at Christopher Mims’s piece this morning in the Wall Street Journal, “Generation Z’s 7 Lessons for Surviving in Our Tech-Obsessed World.” Lesson six, for example, reads, “Gen Z thinks concerns about screens are overblown.”

My point is not so much that this is wrong, although I tend to think that it is; my point is that this isn’t really a lesson so much as the testimony of some people’s experience. As such it is fine, but it is also the testimony of people who may well lack at least one relevant, if not critical, point of comparison. To put the matter more pointedly, the rejoinder that flits into my mind is simply this: What do they know?

That’s not entirely fair, of course. They know some things I don’t, I’m sure. But how do we form judgements when we can’t quite imagine the world otherwise? It is more than that, though. I suppose with enough information and a measure of empathy, one can begin to imagine a world that is no longer the case. But you can’t quite feel it in the way that those with a living memory of the experience of being alive before the world turned over can.

If we care to understand the meaning of change, we should heed the testimony of those on whose lives the times have hinged. Their perspective and the kind of knowledge they carry, difficult to articulate as it may be, are unique and valuable.

As I typed this post out, I had the sense that I’d trodden this ground before, and, indeed, I wrote along very similar lines a few years back. If it is all the same to you, dear reader, I’m going to close with what I wrote then. I’m not sure that I could improve very much on it now. What follows draws on an essay by Jonathan Franzen. I know how we are all supposed to feel about Franzen, but just let that go for a moment. He was more right than wrong, I think, when he wrote, reflecting on the work of Karl Kraus, that “As long as modernity lasts, all days will feel to someone like the last days of humanity,” what he called our “personal apocalypses.”

This is, perhaps, a bit melodramatic, and it is certainly not all that could be said on the matter, or all that should be said. But Franzen is telling us something about what it feels like to be alive these days. It’s true, Franzen is not the best public face for those who are marginalized and swept aside by the tides of technological change, tides which do not lift all boats, tides which may, in fact, sink a great many. But there are such people, and we do well to temper our enthusiasm long enough to enter, so far as it is possible, into their experience. In fact, precisely because we do not have a common culture to fall back on, we must work extraordinarily hard to understand one another.

Franzen is still working on the assumption that these little personal apocalypses are a generational phenomenon. I’d argue that he’s underestimated the situation. The rate of change may be such that the apocalypses are now intra-generational. It is not simply that my world is not my parents’ world; it is that my world now is not what my world was a decade ago. We are all exiles now, displaced from a world we cannot reach because it fades away just as its contours begin to materialize. This explains why, as I wrote earlier this year, nostalgia is not so much a desire for a place or a time as it is a desire for some lost version of ourselves. We are like Margaret, who, in Hopkins’ poem, laments the passing of the seasons, Margaret to whom the poet’s voice says kindly, “It is Margaret you mourn for.”

Although I do believe that certain kinds of change ought to be resisted—I’d be a fool not to—none of what I’ve been trying to get at in this post is about resisting change in itself. Rather, I think all I’ve been trying to say is this: we must learn to take account of how differently we experience the changing world so that we might best help one another as we live through the change that must come. That is all.



Privacy Is Not Dying, We’re Killing It

I’ve been thinking again about how the virtues we claim to desire are so often undermined by the technologically mediated practices in which we persistently engage. Put very simply, what we say we value is frequently undermined by what we actually do. We often fail to recognize this because we tend to forget that how we do something (and what we do it with) can be as important, in the long run anyway, as what we do or even why we do it. This is one way of understanding the truth captured by McLuhan’s dictum, “the medium is the message.”

The immediate catalyst for this line of thought was a recent Kara Swisher op-ed about Jeff Bezos’s leaked texts and the “death of privacy.” In it, Swisher writes, “The sweet nothings that Mr. Bezos was sending to one person should not have turned into tweets for the entire world to see and, worse, that most everyone assumed were O.K. to see.”

She concludes her piece with these reflections:

Of course, we also do this to ourselves. Not to blame the victim (and there are plenty of those in this case, including Mr. Bezos), but we choose to put ourselves on display. Post your photos from your vacation to Aruba on the ever-changing wall of the performative museum that is Instagram? Sure! Write a long soliloquy about your fights with neighbors and great-uncles on Facebook? Sign me up! Tweet about a dishwasher mishap with a big box retailer on Twitter? Wait, that’s me.

I think you get my point here. We are both the fodder for and the creators of the noise pollution that is mucking up so much, including the national discourse. That national discourse just gave us Mr. Bezos as one day’s entertainment.

Are you not entertained? I am, and I am also ashamed.

It was refreshing to read that, frankly. More often than not in these cases, we tend to focus on the depredations of the tech companies involved or the technological accident or the bad actors, and, of course, there’s often plenty to focus on in each case. But this line of analysis sometimes lets us off the hook a bit too easily. Privacy may not be dead but it’s morphing, and it is doing so in part because of how we habitually conduct ourselves and how our tools mediate our perception.

So we say we value privacy, but we hardly understand what we mean by it. Privacy flourishes in the attention economy to the same degree that contentment flourishes in the consumer economy, which is to say not at all. Quietly and without acknowledging as much, we’ve turned the old virtue into a vice.

We want to live in public but also control what happens to the slices of life we publicize. Or we recoil at the thought of our foibles being turned into one day’s entertainment on Twitter but, as Swisher notes, we nonchalantly consume such entertainment when someone else is the victim.

Swisher’s “Are you not entertained?” obviously recalls Maximus’s demand of the audience in the film, Gladiator. It may seem like an absurd question, but let us at least consider it for a moment:  how different is a Twitter mob from the ancient audiences of the gladiatorial spectacle? Do we believe that such mobs can’t issue forth in “real world” violence or that they cannot otherwise destroy a life? One difference of consequence, I suppose, is that at least the ancient audience did not bathe itself in self-righteousness.

Naturally, all of this is just an extension of what used to be the case with celebrities in the age of electronic media and print tabloids. Digital media simply democratizes both the publicity and its consequences. Celebrity is now virality, and it could happen to anyone, whether they want it to or not. The fifteen minutes of fame Warhol predicted have become whatever the life cycle of a hashtag happens to be, only the consequences linger far longer. Warhol did not anticipate the revenge of memory in the digital age.

The Discourse is Poisoned, But You Already Knew That

It occurred to me today that one of Nathaniel Hawthorne’s short stories may indirectly teach us something about the dysfunctional dynamics of social media discourse. The story is “Young Goodman Brown”; you may remember it from a high school or undergrad English class. It is one of Hawthorne’s most frequently anthologized tales.

The story is about a seemingly ordinary young man who ventures out to the forest for an undisclosed but apparently illicit errand, leaving behind his newlywed bride, Faith. Along the way, he meets a peculiar but distinguished man, who claims to have known his father and his father’s father. The distinguished gentleman turns out to be the Devil, of course, and, by and by, reveals to young goodman Brown how all of the town’s most upstanding citizens, ministers and civic leaders included, are actually his very own subjects. Brown even discovers that his dear Faith has found herself in the midst of the infernal gathering in the forest.

At the climactic point, Brown cries out, “Faith! Faith! Look up to Heaven, and resist the Wicked One!”

What happens next, Hawthorne cloaks in mystery. Brown suddenly, as if awakening from a dream, finds himself alone in the quiet forest with no trace of another person anywhere. Although he remained uncertain about what had happened to him, he emerged a changed man. Now he could not look on anyone without seeing vice and hypocrisy. He trusted no one and imagined everyone, no matter how saintly, to be a vile hypocrite. “A stern, a sad, a darkly meditative, a distrustful, if not a desperate man, did he become, from the night of that fearful dream.”

Hawthorne, who, in my view, is particularly adept in his exploration of intersubjectivity, presents us with a story about knowledge acquired and innocence lost. The knowledge Brown gains is putatively about the true nature of those around him, but the story leaves the status of that knowledge in doubt. It is enough, though, that Brown believes that he knows the truth. This knowledge, of course, poisons all his relationships and ultimately destroys him.

I recalled Hawthorne’s story this morning as I read Roger Berkowitz’s discussion of a recent essay by Peter Baehr, “Against unmasking: five techniques that destroy dialogue and how to avoid them.”  Berkowitz is the director of the Hannah Arendt Center for Politics and Humanities and his comments draw on Arendt’s On Revolution:

“The human heart is a place of darkness.” Thus, the demand for purity in motives is an impossible demand that “transforms all actors into hypocrites.”

For Arendt, “the moment the display of motives begins, hypocrisy begins to poison all human relations.” And unmasking is the name that Arendt gives to the politics of virtue, the demand that all hypocrites be unmasked.

The danger in unmasking is that all people must, as people, where [sic] masks. The word “person” is from the Latin persona, which means literally to “sound-through.” A person in ancient Rome was a citizen, one who wore the mask of citizenship. “Without his persona, there would be an individual without rights and duties, perhaps a ‘natural man’ … but certainly a politically irrelevant being.”

For his part, Baehr wrote,

Unmasking inverts people’s statements and makes them look foolish. It reduces a concept or a theory to the supposed ideological position of the writer. It trades on a mistaken concept of illusion. And, more generally, it burdens enquiry with a radical agenda of emancipation that people of different views have no reason to accept as valid. In politics, writers who adopt the unmasking style repeatedly treat other people not as fellow citizens with rival views of the good but as villains. In some revolutionary situations unmasking weaponization is the rhetoric of mass murder. And beyond these extremes, unmasking stokes mutual contempt.

In Hawthorne’s story, Brown ends up an exemplar of just such an unmasking style. He believes that he has seen all of his fellow citizens unmasked, and he treats them accordingly.

If you come to believe that you know the truth about someone or that you know their “real” self, then there is nothing else to learn. There is no need to listen or to observe. You can assume that every argument is conducted in bad faith. There can, in fact, be no dialogue at all. Conversation is altogether precluded.

In a recent essay that was widely discussed, philosopher Justin E. H. Smith explores these dynamics by way of that bingo card meme listing a set of predictable comments you can expect from a person who has been ideologically pigeonholed. The presumption is that such a person’s view “is not actually a considered view at all, but only an algorithmically predictable bit of output from the particular program he is running.”

“But something’s wrong here,” he goes on to say, commenting on what an appreciation of William Burroughs does and does not mean:

Burroughs does not in fact entail the others, and the strong and mature part of the self—that is to say the part that resists the forces that would beat all human subjectivity down into an algorithm—knows this. But human subjects are vanishingly small beneath the tsunami of likes, views, clicks and other metrics that is currently transforming selves into financialized vectors of data. This financialization is complete, one might suppose, when the algorithms make the leap from machines originally meant only to assist human subjects, into the way these human subjects constitute themselves and think about themselves, their tastes and values, and their relations with others.

“This gutting of our human subjecthood,” Smith goes on to argue, “is currently being stoked and exacerbated by, and integrated into a causal loop with, the financial incentives of the tech companies. People are now speaking in a way that results directly from the recent moneyballing of all of human existence.”

I would add just one more dimension to these considerations. The unmasking style takes on a distinct quality on social media, and I’m not sure that Smith’s essay touches on this.

Imagine a young goodman Brown today venturing out into the dark forests of social media. What would be revealed to him by the distinguished gentleman is not so much the moral failure or vices of his contemporaries but rather their supposed inauthenticity. It would be revealed to him that everyone is in on the moneyballing of human subjectivity, that everyone is in on the attention racket.

Consider the Instagram egg. You know, the stock photo of an egg that quickly became the most “liked” image in Instagram’s history. Writing about the account for Wired, Louise Matsakis explained,

the creator is clearly familiar with some of the engines of online fame. They tagged a number of people and publications that regularly cover viral memes like LADBible, Mashable, BuzzFeed, the Daily Mail, and YouTube star PewDiePie in the egg photo itself. Once the account took off, they began to sell “official” merchandise, including T-shirts that say “I LIKED THE EGG,” which go for $19.50.

Moreover,

The account behind the egg, which has nearly 6 million followers, is also an incredibly valuable marketing vehicle. Brands could spend thousands of dollars to advertise with it, according to sources in the influencer marketing industry who spoke with Recode. When the egg goes out of style, its creator could also sell the account or pivot to sharing another type of content with the massive audience they’ve already attracted.

There’s more, but you get the point. The lesson, of sorts, to which the author brought the article was this: “The egg and other seemingly meaningless internet fads are beloved because they stand in contrast to the manicured, optimized content that often fills our social media feeds, especially from people like the Kardashians.”

I don’t know. It seems to me that the contrast is actually only superficial. They are all playing the game. Or rather, they are playing the game; the rest of us think we’re playing the game but, in fact, are simply getting played. But the point, for present purposes, is that we take for granted that, whether they are winning or losing, everyone is playing the attention game, which, we reason, tacitly perhaps, implies that they are by definition conducting themselves in bad faith. And that is why, to borrow the title of Smith’s essay, it’s all over.

It is one thing to believe that someone is a moral hypocrite and should not be trusted. It is another thing altogether to believe that no one is ever, whether in matters of politics or in the expression of their most inane affinities, doing anything more than posturing for attention of some kind or another. That poisons everything. And so long as we are engaging on social media, especially with people we do not otherwise know offline, it is virtually unavoidable. The structure of social media demands that we internally experience ourselves as performers calling on the attention of an audience. And we know, whether we hate ourselves for it or not, that the insidious metrics always threaten to enslave us. So, we assume, it must be with everyone else.

We are thus tempted simultaneously, and somewhat paradoxically, to believe that those we encounter online are necessarily involved in an inauthentic identity game and that we are capable of ascertaining the truth about them, even on the slimmest of evidence. (Viewed in this light, there is, in fact, a dialectical relationship between the kind of subjectivity Smith commends and the consequences of the moneyballing he decries.) Or, to put it another way, we believe we know the truth about everyone and the truth we know is that there is no truth to be known. So our public sphere takes on not a cynical quality, but a nihilistic one. That is the difference between believing that everyone is a moral hypocrite and believing that everyone is inauthentically posturing for attention.

It is difficult to see how any good thing can come out of these conditions, especially when they spill out beyond social media platforms, as they necessarily must.

Technology Is a Branch of Moral Philosophy

“Whether or not it draws on new scientific research, technology is a branch of moral philosophy, not of science.”

Or so argued Paul Goodman in a 1969 essay for the New York Review of Books titled “Can Technology Be Humane?” I first encountered the line in Postman’s Technopoly and was reminded of it a few years back in a post by Nick Carr. I commented briefly on the essay around that same time, but found myself thinking about that line again more recently. Goodman’s essay, if nothing else, warrants our consideration for how it presages and informs some of the present discussion about humane technology.

Technology, Goodman went on to write in explanation of his claim,

“aims at prudent goods for the commonweal and to provide efficient means for these goods. At present, however, ‘scientific technology’ occupies a bastard position in the universities, in funding, and in the public mind. It is half tied to the theoretical sciences and half treated as mere know-how for political and commercial purposes. It has no principles of its own.”

What to do about this? Re-organize the whole technological enterprise:

“To remedy this—so Karl Jaspers in Europe and Robert Hutchins in America have urged—technology must have its proper place on the faculty as a learned profession important in modern society, along with medicine, law, the humanities, and natural philosophy, learning from them and having something to teach them. As a moral philosopher, a technician should be able to criticize the programs given him to implement. As a professional in a community of learned professionals, a technologist must have a different kind of training and develop a different character than we see at present among technicians and engineers. He should know something of the social sciences, law, the fine arts, and medicine, as well as relevant natural sciences.”

Clearly this was a far more robust program than contemporary attempts to shoehorn an ethics class into engineering and computer science programs, however noble the intent. Equally clearly, it is almost impossible to imagine such a re-organization. That horizon of opportunity closed, if it was ever truly open.

Regarding the virtue of prudence, Goodman wrote,

“Prudence is foresight, caution, utility. Thus it is up to the technologists, not to regulatory agencies of the government, to provide for safety and to think about remote effects. This is what Ralph Nader is saying and Rachel Carson used to ask. An important aspect of caution is flexibility, to avoid the pyramiding catastrophe that occurs when something goes wrong in interlocking technologies, as in urban power failures. Naturally, to take responsibility for such things often requires standing up to the front office and urban politicians, and technologists must organize themselves in order to have power to do it.”

There’s much else to consider in the essay, even if only in the vein of imagining an alternative historical reality that might have unfolded. In fact, though, there is much that remains usefully suggestive should we care to take stock and reconsider how we relate to technology. That last line certainly calls to mind recent efforts by tech workers at various firms, including Google, to organize in opposition to projects they deemed immoral or unjust.