Conference on Democracy and the Internet

Not much has been going on here for the past four months or so. Not sure that will be changing anytime soon, but I did want to let you all know about a conference at which I’ll be speaking this coming Friday, just in case you happen to be in or near Washington, D.C.

The conference is titled “American Democracy in the Internet Age” and it will be hosted by the Center for the Study of Statesmanship at Catholic University. You can read more about it here.

Do hope you all are well. As life circumstances have evolved and times have changed, the fate of this blog has ebbed and flowed. I’m not sure, honestly, whether it is long for this digital world or not. Whatever the case, my newsletter, The Convivial Society, is where I’m doing most of my writing these days. You can subscribe here if you are so inclined.

Cheers!

How to Make Twitter Morally Useful in Four Steps

I’ve developed a four-step strategy for making Twitter morally useful.

Step One: Compose your tweet

It will be best to do this with as little reflection and revision as possible. Simply compose your tweet as you are led by external circumstances and internal dispositions. N.B. Quote tweets can be especially instructive for the purposes of this exercise.

Step Two: Hold your tweet up as a mirror of your soul

This, of course, is the crucial part, but, after a moment’s effort, it should not prove all that challenging. It will, however, require a measure of honesty with oneself and careful attention to what one is actually thinking and feeling. Perhaps you begin with a simple question to yourself: Why? Why am I tweeting this? Not ostensibly, but in reality. Additionally, you might ask the following: What do I hope this tweet will accomplish? What is it likely to accomplish? Who is the real, again not ostensible, audience? Etc. You get the idea. Finally, reflect on what the answers to questions like these reveal about you.

Step Three: Delete the drafted tweet

Its work is done. Delete the draft. Don’t think very long about this. Just do it. Everyone, including you, will be better for it.

Step Four: Repent, do your penance, resolve to be a better human being, etc.

Seek the counsel of your moral/religious/spiritual tradition for how best to proceed along the path of moral growth.

Of course, this post is written in a somewhat facetious spirit, but only somewhat. I should add, too, that I certainly don’t like what I see when I hold Twitter up as a mirror of my soul. And, yes, you could perform this exercise with other platforms; Twitter rather focuses the matter for me.

Language in the Digital Maelstrom

For a time, I taught an English Lit survey class. I often made it a point to observe, in cursory fashion, how the language we call English evolved from Beowulf to Chaucer to Shakespeare and finally to Austen, say, or Eliot. The point was to highlight how language evolves over time, but also to observe the rate at which the language evolved.

[Images: the Beowulf manuscript (Cotton MS Vitellius A XV), Caxton’s edition of the Canterbury Tales, and Shakespeare’s First Folio]

Please bear in mind that these are the observations of someone who is not a linguist. However, to the casual observer like myself it seems as if the language evolved dramatically from the time of the one surviving medieval manuscript of Beowulf, likely composed c. 1000 AD, to the time of Chaucer’s Canterbury Tales in the late 1300s. And again there is a marked difference between Chaucer’s English and Shakespeare’s, who wrote in the late 1500s (and beyond). But then, while there is change to be sure, Shakespeare’s English seems closer to Austen’s and to ours than it is to Chaucer’s or the Beowulf poet’s.

The stabilizing force would seem to be the consequences of printing, which played out over time, although, as with most things, the story is complicated. In any case, this brings us, of course, to the consequences of digital media for the evolution of language. Nothing can be said with certainty on this score, and, again, the relationship will turn out to be extremely complex. As I’ve noted elsewhere, following the media ecologists, the transitions from oral to written to printed to electric to digital forms of communication are immensely consequential. But none of the previous transitions can serve as a precise model for the transition to digital. Digital, after all, involves writing and sound and image. It is a complex medium. It retrieves features of oral culture, for example, but also preserves features of print culture. The word I’ve found myself using for its effects is “scrambles.”

I would hazard an observation or two, though. If print was, in some respects, a stabilizing force over time, digital media will be destabilizing to some degree and in ways that are difficult to pin down. This is not necessarily a value judgment, especially if we consider the stabilizing effects of print to be a historically contingent development. Perhaps a better way of putting the matter is to say that digital media churns or stirs up our linguistic waters.

For example, digital tools of communication are still used to convey the written word, of course—you’re reading this right now—and in certain digital contexts, such writing still adheres to more conventional standards based in print culture. In other contexts, however, it does not. The case of spelling comes to mind. Spelling was notoriously irregular until late in the print era. In fact, something like a spelling bee would have been unthinkable until sometime in the 19th century (linguists, please correct me if I’m wrong about this). But now, the practice of spelling, to the consternation of some, has once again become somewhat irregular in certain rhetorical situations. (It’s interesting to consider how the rise of autofill technologies will play out in this regard. They may very well be a conserving, stabilizing force.)

But this kind of apparent de-stabilization is not the most interesting thing going on. What has caught my attention is the destabilization of meaning. Perhaps we could distinguish between conventional flux (the flux of conventions or standards, that is) and semantic flux. Language has always exhibited both, but at varying rates. What I’m wondering is whether we are experiencing a heightened rate of semantic flux under digital conditions. If so, then I think it would have more to do with how digital media enables disparate communities to enter into dialog with one another—although dialog seems hardly the right word for it—and in something like disembodied real time, the condition of virtual presence. A very large scale case of context collapse, if you will. In this regard, digital media radically accelerates the kind of evolution we might have seen over much longer periods of time.

Words, phrases, concepts—they are generated, disseminated, and rendered meaningless within days. The remarkably short semantic half-life of language, we might say. But these words, phrases, concepts, etc. don’t simply go away, they linger on in a kind of zombie mode:  still used but signifying nothing. The example of this phenomenon that most readily comes to mind is the notorious phrase “fake news.” I’m sure you can supply others. Indeed, virtually every key term or concept that is drawn into or arises out of contested rhetorical contexts is doomed to suffer a similar erosion of meaning.

The underlying assumptions here are simply that language is the foundation of human association and political life and that communication media amount to the infrastructure sustaining our use of language. The nature of the infrastructure inevitably informs the use of language, for better and for worse. Bad actors aside, it’s worth considering whether the scale and speed of communication enabled by digital media are ultimately unable to support what we might think of as sustainable rates of conventional and semantic change.


You can subscribe to my newsletter, The Convivial Society, here.

Baseball, Judgment, and Technocracy

Having been invested in a variety of sports since my youth, I’m basically down to baseball as I come into middle age. I should just go ahead and admit that I have a somewhat romanticized relationship with the sport, which began when, late in my childhood, I started listening to the New York Mets play on the radio. This was an odd and fateful development. It was odd because I was living in Miami, but this was before an expansion team came to Florida and I suppose there were lots of transplanted New Yorkers in town. It was fateful because, of course, becoming a life-long Mets fan is the sort of thing you don’t wish on your enemies, although apparently it is something you foist upon your children who don’t yet know any better.

I’m the type that, in the right mood, will go on and on about the smell of the leather, the crack of the bat, the feel of the grass and dirt underfoot, the pace of the game, the rhythm of the season and how it tracks with the natural patterns of life, death, and rebirth, etc., etc. I will mean it all, while acknowledging the hackneyed sentimentality of it.

I also think baseball offers an interesting vantage point from which to think about technology, or, better, the technocratic spirit. The sport itself seems to contain conflicting tendencies:  one resistant to the technocratic impulse and the other embracing it. On the one hand, one way of thinking about baseball emphasizes its agrarian character, its deliberate pacing, its storied tradition, and so on. Another way of thinking about baseball would emphasize the fixation with numbers, statistics, analytics, etc. Baseball in this vision is an as-yet-to-be-realized technocracy.

This post, I should have mentioned by now, is chiefly held together by baseball, and, somewhat more loosely, by reflection on the theme of judgment. That said, here are some pieces that I’ve read in the last year or so on the theme of baseball, which also raise some good questions about how we relate to technology.

First off, I began mentally drafting this post when I read the first paragraph of a review of a book about Roger Angell’s prolific baseball writing, which spans nearly 70 years. The reviewer opened by recalling Angell’s first column:

“In its May 27, 1950 issue, The New Yorker published Roger Angell’s short, whimsical piece about ‘the decline of privacy,’ a development ‘speeded by electronics’ that was subtly reshaping politics, relationships, and the national pastime. ‘At a recent ball game,’ he reported, ‘a sensitive microphone at home plate picked up the rich comments of one of the team managers to the umpire and sent them winging to thousands of radio sets, instantly turning the listeners into involuntary eavesdroppers.'”

When I read something like this, I immediately wonder how it struck people at the time. I wonder, too, how it strikes us today. It must seem quaint, yet with an air of familiarity. It suggests to me a trajectory. Privacy was not suddenly taken from us. We did not yield it up in one grand Faustian bargain. Rather, we traded it away here and there, acquiesced when it was seized for this reason or that, hardly noticed as the structures that sustained it were eroded. Along the way, of course, some have noticed, some have expressed their concerns. But at any given point, until it became much too late, these concerns were too easily dismissed, the pattern mostly obscured.

It is tempting to think about the relationship between society and technology as a series of grand and sudden disruptions keyed to the arrival of a new device or a new machine. But the relationship between technology and society is complicated by the fact that the realities we think we are naming when we say “technology” and “society” are, in fact, always already deeply intertwined. Techno-social transformations are more likely to unfold gradually, in subterranean fashion, before they become suddenly obvious and the explicit source of cultural angst.

I’ll now go backwards to the oldest item on the list, a column by Alan Jacobs titled “Giving Up On Baseball.” In this piece, Jacobs, a life-long fan, explained why the game was losing its hold on him. Chiefly, it amounted to the triumph of fine-grained analytics dictating team strategy. As Jacobs succinctly put it, “Strangely enough, baseball was better when we knew less about the most effective way to play it.”

This paradoxical point, with which I tend to agree, raises an interesting question. By way of getting to that question, I’ll first recall for us Heidegger’s distinction between what is correct and what is true. What is correct may not yet be true, in part because it may be incomplete and thus potentially if not actually misleading. Perhaps we might similarly distinguish, along the lines of Jacobs’s analysis, between what is correct and what is good. As Jacobs readily concedes, the analytically sophisticated way of approaching the game yields results. GMs, managers, and players are correct to pursue its recommendations. However, granting this point, might we not also conclude that while it is correct it is not good? Its correctness obfuscates some larger reality about the game, or the human experience of the game, in which the goodness of the game consists.

We might generalize this observation in this way. The analytically intensive approach to the game is a mode of optimization. Optimization seems to be something like a fundamental value operating at the intersection of technology and society. Like efficiency, it is a value that seems most appropriate to the operation of a machine, but it has seeped into the cultural sphere. It has become a personal value. We seek to optimize both devices and the self. But to what end? Is such optimization good? Perhaps it is correct in this field or that endeavor, but at what cost?

This segues nicely into the next piece, a recent installment of Rob Horning’s excellent newsletter, which is also a weekly dispatch at Real Life. In it, Horning opens with a series of observations about the ever more refined data that is now gathered in a baseball stadium:

“That left the bat at 107 miles per hour and traveled 417 feet.” These figures, often cited with a “how about that!” enthusiasm, are not only advertisements for the new surveillance capacity that is circumscribing the game, but they also evoke the fantasy of a completely datafied world where every act can be rendered “objectively” and be further analyzed. In that world, everyone’s individual contribution can be cleanly separated and perfectly attributed.

From here, Horning winds his way through a discussion of WAR, or Wins Above Replacement, a statistic that intends to render a player’s total value to their team in abstraction from the rest of the team. As Horning puts it, WAR “posits an ideal:  that any positive contribution a player makes can be isolated and measured directly or inferred from other data sets.” This then brings Horning to a discussion of productivity in conversation with Melissa Gregg’s Counterproductive. You should read the whole piece, but this section seemed especially relevant to the path along which this post is unfolding:

Gregg argues that “the labor of time management is a recursive distraction that has postponed the need to identify a worthwhile basis for work as a source of spiritual fulfillment.” Instead, there is a sense that saving time is an end in itself. You don’t need any good ideas about what to spend it on. This unfolds the possibility of a fully gamified life, unfettered by actual games, rules, standings, actual victories — just statistical simulations of wins pegged to tautological efficiency measures that serve no perceptible purpose. As Gregg writes, “personal productivity is an epistemology without an ontology, a framework for knowing what to do in the absence of a guiding principle for doing it.” It’s a treadmill masquerading as a set of goals.

One last item in this meandering post. This one is an interview the philosopher Alva Noë gave the Los Angeles Review of Books. I learned in this interview that Noë is also a Mets fan, and so a brother of sorts in the fellowship of the perpetually disappointed. I was chiefly interested, however, in Noë’s discussion of the role of judgment in baseball:

I love the job played by judgment in baseball. It’s what makes the game so vital. Baseball highlights the fact that you can’t eliminate judgment from sport, or, I think, from life. Sure, you can count up home runs and strikeouts and work out the rates and percentages. You can use analysis to model and compare players’ performances. But you can’t ever eliminate the fact that what you are quantifying, what you are counting, that whose frequency you are measuring, is always the stuff of judgment — outs, hits, strikes, these are always judgment calls.

“We as a culture are infatuated with the idea that you can eliminate judgment and let the facts themselves be our guide,” Noë adds,  “whether in sports or in social policy. Baseball reminds us that there are limits.”

But not really, because as Noë himself observes a bit later on, the possibility of doing away with umpires in favor of automated decision making does not seem altogether implausible. Noë does not think it will come to this. Perhaps not, who knows. But it does seem as if this is where the demand for correctness inexorably leads us. 

Toward the end of the interview, Noë talks about what worries him about the new “moneyball”:

It eliminates players as agents, players as human beings who are on a team and working together for an outcome, and views them, instead, as mere assemblages of baseball properties that are summed-up by the numbers.

This, I would argue, is a warning that speaks to trends far beyond the world of baseball. This development in baseball is but one instance of a much larger pattern that threatens to swallow up the whole of human affairs.

There’s much more to the interview, and, as with the other pieces, I encourage you to read the whole thing.

One last thought. It seems to me that at some ill-defined point the pursuit of efficiency, optimization, correctness, etc., simply flips in such a way that something essential to our experience is lost. We pass a threshold across which ends are forgotten, truth is obscured, and the good is undermined. It is as if, not unlike Huxley’s Savage, we need to claim the right to be not only unhappy, but also, to give one example, wrong. The status of judgment as a human good obtains only if we can err in judging.


You can subscribe to my newsletter, The Convivial Society, here.

Time, Self, and Remembering Online

Very early on in the life of this blog, memory became a recurring theme. I write less frequently about memory these days, but I’m no less convinced that among the most important consequences of digital media we must count its relationship to memory. After all, as the filmmaker Luis Buñuel once put it, “Our memory is our coherence, our reason, our feeling, even our action. Without it, we are nothing.”

“What anthropologists distinguish as ‘cultures,’” Ivan Illich has written, “the historian of mental spaces might distinguish as different ‘memories.’” This strikes me as being basically right, and, as Illich knew, different memories arise from different mnemonic technologies.

It seems tricky to quantify this sort of thing or provide precise descriptions of causal mechanisms, etc., but I’d lay it out like this:

  1. We are what we remember.
  2. What we remember is a function of how we remember.
  3. How we remember, in turn, is a function of our technological milieu.
  4. So, a technological restructuring of how we remember is also a restructuring of consciousness, of the self.

So, that said, I recently stumbled upon this tweet from Aaron Lewis: “what if old tweets were displayed with the profile pic you had at the time of posting. a way to differentiate between past and present selves.”

This tweet was provocative in the best sense: it called forth thinking.

I’ll start by noting that there seems to be an assumption here that doesn’t quite hold in practice:  that people are frequently changing profile pics in a way that straightforwardly mirrors how they are changing over time, or even that their profile picture is an image of their face. But the practical feasibility is beside the point for my purposes. Two things interested me:  the problem to which Lewis’s speculative proposal purports to be a solution, and, consequently, what it tells us about older forms of remembering that were not digitally mediated.

So, what is the problem to which Lewis’s proposal is a solution? It seems to be a problem arising from an overabundance of memory, on the one hand, and, on the other, from how that memory relates to our experience of identity. In a follow-up tweet, Lewis added, “it’s disorienting when one of my old tweets resurfaces, wearing the digital mask i’m using here in 2019.”

I’m going to set aside for now an obviously and integrally related matter:  to what degree should our present self be held responsible for the utterances of an older iteration of the self that resurface through the operations of our new memory machines? This is a serious moral question that gets to the heart of our emerging regimes of digital justice, and one that is hotly debated every time that an old tweet or photograph is dug up and used against someone in the present. This is what I’ve taken to referring to as the weaponization of memory (this means that we can both imagine a host of morally distinguishable uses and that environments are restructured whether the weapon is deployed or not). In short, I think the matter is complicated, and I have no cookie-cutter solution. It seems to me that society will need to develop something like a tradition of casuistry to adjudicate such matters equitably and that we still have a long way to go.

Lewis’s observations suggest that our social media platforms, whatever else they may be, are volatile archives of the self. They are archives, and I use the term loosely, because they store slices of the self. Of course, we should acknowledge the fact that the platforms invite performances of the self, which requires us to think more closely about what exactly they are storing:  uncomplicated representations of the self as it is at that point? representations of the self as it wants to be perceived? tokens of the self as it wants to be perceived which are thus implicitly reminders of the self we were via its aspirations? Etc.

They are volatile in that they are active, social archives whose operations trouble the relationship between memory and the self by more widely distributing agency over the memories that constitute the self. Our agency over our self-presentation is distributed among the algorithms which structure the platforms and other users on the platform who have access to our memories and whose intentions toward us will vary wildly.

What I’m reading into Lewis’s proposal then is an impulse, not at all unwarranted, to reassert a measure of agency over the operations of digitally mediated memory. The need to impose this order in turn tells us something about how digitally mediated memory differs from older forms of remembering.

For one thing, the scale and structure of pre-digital memory did not ordinarily generate the same experience of a loss of agency over memory and its relation to the self. We did not have access to the volume of externalized memories we now do, and, more importantly, neither did anyone else. With Lewis’s specific proposal in mind, I’d say that the ratio of remembering and forgetting, and thus of continuity and discontinuity of the self, was differently calibrated, too. To put it another way, what I’m suggesting is that we remembered and forgot in a manner that accorded with a relatively stable experience of the evolving self. As Derrida once observed, “They tell, and here is the enigma, that those consulting the oracle of Trophonios in Boeotia found there two springs and were supposed to drink from each, from the spring of memory and from the spring of forgetting.”

And, even more specifically to Lewis’s point, I’d say that his proposal makes explicit the ordinary and humane rhythms of change and continuity, remembering and forgetting implicit in the co-evolution of self and body over time. “When I was a child,” the Apostle wrote, “I spoke like a child, I thought like a child, I reasoned like a child.” And, we may add, I looked like a child. Thus the appropriateness of my childishness was evident in my appearance. Yes, that was me as I was, but that is no longer me as I now am, and this critical difference was implicit in the evolution of my physical appearance, which signaled as much to all who saw me. No such signals are available to the self as it exists online.

Indeed, we might say that the self that exists online is in one important respect a very poor representation of the self precisely because of its tendency toward completeness of memory. Digital media, particularly social media platforms, condense the rich narrative of the self’s evolution over time into a chaotic and perpetual moment. We might think of it as the self stripped of its story.* In any case, suffice it to note that we find ourselves once more needing to compensate, with little success it would appear, for the absence of the body and the meaning it carries.

Lastly, thinking back to the obviously self-serving push in the last decade by social media companies like Facebook for users to maintain one online identity as a matter of integrity and authenticity, we may now see that demand as paradoxical at best. The algorithmically constituted identity built upon its archives of the self that the platforms impose upon us is a self we never have been nor ever will be. More likely, we will find that it is a self we find ourselves often chasing and sometimes fleeing.


You can subscribe to my newsletter, The Convivial Society, here.