Language in the Digital Maelstrom

For a time, I taught an English Lit survey class. I often made it a point to observe, in cursory fashion, how the language we call English evolved from Beowulf to Chaucer to Shakespeare and finally to Austen, say, or Eliot. The point was to highlight how language evolves over time, but also to observe the rate at which it evolved.

[Images: a folio from the Beowulf manuscript (Cotton MS Vitellius A XV), Caxton's edition of the Canterbury Tales, and Shakespeare's First Folio]

Please bear in mind that these are the observations of someone who is not a linguist. To a casual observer like myself, however, it seems as if the language evolved dramatically from the time of the one surviving medieval manuscript of Beowulf, copied around 1000 AD, to the time of Chaucer's Canterbury Tales in the late 1300s. And again there is a marked difference between Chaucer's English and the English of Shakespeare, who wrote in the late 1500s (and beyond). But then, while there is change to be sure, Shakespeare's English seems closer to Austen's and to ours than it is to Chaucer's or the Beowulf poet's.

The stabilizing force would seem to be the consequences of printing, which played out over time, although, as with most things, the story is complicated. In any case, this brings us, of course, to the consequences of digital media for the evolution of language. Nothing can be said with certainty on this score, and, again, the relationship will turn out to be extremely complex. As I've noted elsewhere, following the media ecologists, the transitions from oral to written to printed to electric to digital forms of communication are immensely consequential. But none of the previous transitions can serve as a precise model for the transition to digital. Digital, after all, involves writing and sound and image. It is a complex medium. It retrieves features of oral culture, for example, but also preserves features of print culture. The word I've found myself using for its effects is "scrambles."

I would hazard an observation or two, though. If print was, in some respects, a stabilizing force over time, digital media will be destabilizing to some degree and in ways that are difficult to pin down. This is not necessarily a value judgment, especially if we consider the stabilizing effects of print to be a historically contingent development. Perhaps a better way of putting the matter is to say that digital media churns or stirs up our linguistic waters.

For example, digital tools of communication are still used to convey the written word, of course—you're reading this right now—and in certain digital contexts, such writing still adheres to more conventional standards based in print culture. In other contexts, however, it does not. The case of spelling comes to mind. Spelling was notoriously irregular until late in the print era. In fact, something like a spelling bee would have been unthinkable until sometime in the 19th century (linguists, please correct me if I'm wrong about this). But now, the practice of spelling, to the consternation of some, has once again become somewhat irregular in certain rhetorical situations. (It's interesting to consider how the rise of autofill technologies will play out in this regard. They may very well be a conserving, stabilizing force.)

But this kind of apparent destabilization is not the most interesting thing going on. What has caught my attention is the destabilization of meaning. Perhaps we could distinguish between conventional flux (the flux of conventions or standards, that is) and semantic flux. Language has always exhibited both, but at varying rates. What I'm wondering is whether we are experiencing a heightened rate of semantic flux under digital conditions. If so, then I think it would have more to do with how digital media enables disparate communities to enter into dialog with one another—although dialog seems hardly the right word for it—in something like disembodied real time, the condition of virtual presence. A very large-scale case of context collapse, if you will. In this regard, digital media radically accelerates the kind of evolution we might otherwise have seen over much longer periods of time.

Words, phrases, concepts—they are generated, disseminated, and rendered meaningless within days. The remarkably short semantic half-life of language, we might say. But these words, phrases, concepts, etc. don’t simply go away, they linger on in a kind of zombie mode:  still used but signifying nothing. The example of this phenomenon that most readily comes to mind is the notorious phrase “fake news.” I’m sure you can supply others. Indeed, virtually every key term or concept that is drawn into or arises out of contested rhetorical contexts is doomed to suffer a similar erosion of meaning.

The underlying assumptions here are simply that language is the foundation of human association and political life and that communication media amount to the infrastructure sustaining our use of language. The nature of the infrastructure inevitably informs the use of language, for better and for worse. Bad actors aside, it’s worth considering whether the scale and speed of communication enabled by digital media are ultimately unable to support what we might think of as sustainable rates of conventional and semantic change.


You can subscribe to my newsletter, The Convivial Society, here.

Baseball, Judgment, and Technocracy

Having been invested in a variety of sports since my youth, I’m basically down to baseball as I come into middle age. I should just go ahead and admit that I have a somewhat romanticized relationship with the sport, which began when, late in my childhood, I started listening to the New York Mets play on the radio. This was an odd and fateful development. It was odd because I was living in Miami, but this was before an expansion team came to Florida and I suppose there were lots of transplanted New Yorkers in town. It was fateful because, of course, becoming a life-long Mets fan is the sort of thing you don’t wish on your enemies, although apparently it is something you foist upon your children who don’t yet know any better.

I’m the type that, in the right mood, will go on and on about the smell of the leather, the crack of the bat, the feel of the grass and dirt underfoot, the pace of the game, the rhythm of the season and how it tracks with the natural patterns of life, death, and rebirth, etc., etc. I will mean it all, while acknowledging the hackneyed sentimentality of it.

I also think baseball offers an interesting vantage point from which to think about technology, or, better, the technocratic spirit. The sport itself seems to contain conflicting tendencies:  one resistant to the technocratic impulse and the other embracing it. On the one hand, one way of thinking about baseball emphasizes its agrarian character, its deliberate pacing, its storied tradition, and so on. Another way of thinking about baseball would emphasize the fixation with numbers, statistics, analytics, etc. Baseball in this vision is an as-yet-to-be-realized technocracy.

This post, I should have mentioned by now, is chiefly held together by baseball, and, somewhat more loosely, by reflection on the theme of judgment. That said, here are some pieces that I’ve read in the last year or so on the theme of baseball, which also raise some good questions about how we relate to technology.

First off, I began mentally drafting this post when I read the first paragraph of a review of a book about Roger Angell’s prolific baseball writing, which spans nearly 70 years. The reviewer opened by recalling Angell’s first column:

“In its May 27, 1950 issue, The New Yorker published Roger Angell’s short, whimsical piece about ‘the decline of privacy,’ a development ‘speeded by electronics’ that was subtly reshaping politics, relationships, and the national pastime. ‘At a recent ball game,’ he reported, ‘a sensitive microphone at home plate picked up the rich comments of one of the team managers to the umpire and sent them winging to thousands of radio sets, instantly turning the listeners into involuntary eavesdroppers.'”

When I read something like this, I immediately wonder how it struck people at the time. I wonder, too, how it strikes us today. It must seem quaint, yet with an air of familiarity. It suggests to me a trajectory. Privacy was not suddenly taken from us. We did not yield it up in one grand Faustian bargain. Rather, we traded it away here and there, acquiesced when it was seized for this reason or that, and hardly noticed as the structures that sustained it were eroded. Along the way, of course, some noticed and some expressed their concerns. But at any given point, until it became much too late, these concerns were too easily dismissed and the pattern remained mostly obscured.

It is tempting to think about the relationship between society and technology as a series of grand and sudden disruptions keyed to the arrival of a new device or a new machine. But that relationship is complicated by the fact that the realities we think we are naming when we say “technology” and “society” are, in fact, always already deeply intertwined. Techno-social transformations are just as likely to unfold gradually, in subterranean fashion, before they become suddenly obvious and an explicit source of cultural angst.

I’ll now go backwards to the oldest item on the list, a column by Alan Jacobs titled “Giving Up On Baseball.” In this piece, Jacobs, a life-long fan, explained why the game was losing its hold on him. Chiefly, it amounted to the triumph of fine-grained analytics dictating team strategy. As Jacobs succinctly put it, “Strangely enough, baseball was better when we knew less about the most effective way to play it.”

This paradoxical point, with which I tend to agree, raises an interesting question. By way of getting to that question, I'll first recall for us Heidegger's distinction between what is correct and what is true. What is correct may not yet be true, in part because it may be incomplete and thus potentially, if not actually, misleading. Perhaps we might similarly distinguish, along the lines of Jacobs's analysis, between what is correct and what is good. As Jacobs readily concedes, the analytically sophisticated way of approaching the game yields results. GMs, managers, and players are correct to pursue its recommendations. But granting this point, might we not also conclude that while it is correct, it is not good? Its correctness obfuscates some larger reality about the game, or the human experience of the game, in which the goodness of the game consists.

We might generalize this observation in this way. The analytically intensive approach to the game is a mode of optimization. Optimization seems to be something like a fundamental value operating at the intersection of technology and society. Like efficiency, it is a value that seems most appropriate to the operation of a machine, but it has seeped into the cultural sphere. It has become a personal value. We seek to optimize both devices and the self. But to what end? Is such optimization good? Perhaps it is correct in this field or that endeavor, but at what cost?

This segues nicely into the next piece, a recent installment of Rob Horning’s excellent newsletter, which is also a weekly dispatch at Real Life. In it, Horning opens with a series of observations about the ever more refined data that is now gathered in a baseball stadium:

“That left the bat at 107 miles per hour and traveled 417 feet.” These figures, often cited with a “how about that!” enthusiasm, are not only advertisements for the new surveillance capacity that is circumscribing the game, but they also evoke the fantasy of a completely datafied world where every act can be rendered “objectively” and be further analyzed. In that world, everyone’s individual contribution can be cleanly separated and perfectly attributed.

From here, Horning winds his way through a discussion of WAR, or Wins Above Replacement, a statistic that intends to render a player's total value to their team in abstraction from the rest of the team. As Horning puts it, WAR "posits an ideal:  that any positive contribution a player makes can be isolated and measured directly or inferred from other data sets." This then brings Horning to a discussion of productivity in conversation with Melissa Gregg's Counterproductive. You should read the whole piece, but this section seemed especially relevant to the path along which this post is unfolding:

Gregg argues that “the labor of time management is a recursive distraction that has postponed the need to identify a worthwhile basis for work as a source of spiritual fulfillment.” Instead, there is a sense that saving time is an end in itself. You don’t need any good ideas about what to spend it on. This unfolds the possibility of a fully gamified life, unfettered by actual games, rules, standings, actual victories — just statistical simulations of wins pegged to tautological efficiency measures that serve no perceptible purpose. As Gregg writes, “personal productivity is an epistemology without an ontology, a framework for knowing what to do in the absence of a guiding principle for doing it.” It’s a treadmill masquerading as a set of goals.

One last item in this meandering post. This one is an interview the philosopher Alva Noë gave to the Los Angeles Review of Books. I learned in this interview that Noë is also a Mets fan, and so a brother of sorts in the fellowship of the perpetually disappointed. I was chiefly interested, however, in Noë's discussion of the role of judgment in baseball:

I love the job played by judgment in baseball. It's what makes the game so vital. Baseball highlights the fact that you can't eliminate judgment from sport, or, I think, from life. Sure, you can count up home runs and strikeouts and work out the rates and percentages. You can use analysis to model and compare players' performances. But you can't ever eliminate the fact that what you are quantifying, what you are counting, that whose frequency you are measuring, is always the stuff of judgment — outs, hits, strikes, these are always judgment calls.

“We as a culture are infatuated with the idea that you can eliminate judgment and let the facts themselves be our guide,” Noë adds,  “whether in sports or in social policy. Baseball reminds us that there are limits.”

But not really, because as Noë himself observes a bit later on, the possibility of doing away with umpires in favor of automated decision making does not seem altogether implausible. Noë does not think it will come to this. Perhaps not, who knows. But it does seem as if this is where the demand for correctness inexorably leads us. 

Toward the end of the interview, Noë talks about what worries him about the new “moneyball”:

It eliminates players as agents, players as human beings who are on a team and working together for an outcome, and views them, instead, as mere assemblages of baseball properties that are summed-up by the numbers.

This, I would argue, is a warning that speaks to trends far beyond the world of baseball. This development in baseball is but one instance of a much larger pattern that threatens to swallow up the whole of human affairs.

There’s much more to the interview, and, as with the other pieces, I encourage you to read the whole thing.

One last thought. It seems to me that at some ill-defined point the pursuit of efficiency, optimization, correctness, etc., simply flips in such a way that something essential to our experience is lost. We pass a threshold across which ends are forgotten, truth is obscured, and the good is undermined. It is as if, not unlike Huxley’s Savage, we need to claim the right to be not only unhappy, but also, to give one example, wrong. The status of judgment as a human good obtains only if we can err in judging.


You can subscribe to my newsletter, The Convivial Society, here.

Time, Self, and Remembering Online

Very early on in the life of this blog, memory became a recurring theme. I write less frequently about memory these days, but I'm no less convinced that among the most important consequences of digital media we must count its relationship to memory. After all, as the filmmaker Luis Buñuel once put it, "Our memory is our coherence, our reason, our feeling, even our action. Without it, we are nothing."

“What anthropologists distinguish as ‘cultures,’” Ivan Illich has written, “the historian of mental spaces might distinguish as different ‘memories.’” This strikes me as being basically right, and, as Illich knew, different memories arise from different mnemonic technologies.

It seems tricky to quantify this sort of thing or provide precise descriptions of causal mechanisms, etc., but I'd lay it out like this:

  1. We are what we remember.
  2. What we remember is a function of how we remember.
  3. How we remember, in turn, is a function of our technological milieu.
  4. So, a technological restructuring of how we remember is also a restructuring of consciousness, of the self.

So, that said, I recently stumbled upon this tweet from Aaron Lewis: “what if old tweets were displayed with the profile pic you had at the time of posting. a way to differentiate between past and present selves.”

This tweet was provocative in the best sense: it called forth thinking.

I'll start by noting that there seems to be an assumption here that doesn't quite hold in practice:  that people frequently change their profile pics in a way that straightforwardly mirrors how they are changing over time, or even that their profile picture is an image of their face. But the practical feasibility is beside the point for my purposes. Two things interested me:  the problem to which Lewis's speculative proposal purports to be a solution, and, consequently, what it tells us about older forms of remembering that were not digitally mediated.

So, what is the problem to which Lewis’s proposal is a solution? It seems to be a problem arising from an overabundance of memory, on the one hand, and, on the other, from how that memory relates to our experience of identity. In a follow-up tweet, Lewis added, “it’s disorienting when one of my old tweets resurfaces, wearing the digital mask i’m using here in 2019.”

I'm going to set aside for now an obviously and integrally related matter:  to what degree should our present self be held responsible for the utterances of an older iteration of the self that resurface through the operations of our new memory machines? This is a serious moral question that gets to the heart of our emerging regimes of digital justice, and one that is hotly debated every time an old tweet or photograph is dug up and used against someone in the present. This is what I've taken to referring to as the weaponization of memory (by which I mean both that we can imagine a host of morally distinguishable uses and that environments are restructured whether or not the weapon is deployed). In short, I think the matter is complicated, and I have no cookie-cutter solution. It seems to me that society will need to develop something like a tradition of casuistry to adjudicate such matters equitably, and that we still have a long way to go.

Lewis’s observations suggest that our social media platforms, whatever else they may be, are volatile archives of the self. They are archives, and I use the term loosely, because they store slices of the self. Of course, we should acknowledge the fact that the platforms invite performances of the self, which requires us to think more closely about what exactly they are storing:  uncomplicated representations of the self as it is at that point? representations of the self as it wants to be perceived? tokens of the self as it wants to be perceived which are thus implicitly reminders of the self we were via its aspirations? Etc.

They are volatile in that they are active, social archives whose operations trouble the relationship between memory and the self by more widely distributing agency over the memories that constitute the self. Our agency over our self-presentation is distributed among the algorithms which structure the platforms and other users on the platform who have access to our memories and whose intentions toward us will vary wildly.

What I’m reading into Lewis’s proposal then is an impulse, not at all unwarranted, to reassert a measure of agency over the operations of digitally mediated memory. The need to impose this order in turn tells us something about how digitally mediated memory differs from older forms of remembering.

For one thing, the scale and structure of pre-digital memory did not ordinarily generate the same experience of a loss of agency over memory and its relation to the self. We did not have access to the volume of externalized memories we now do, and, more importantly, neither did anyone else. With Lewis's specific proposal in mind, I'd say that the ratio of remembering and forgetting, and thus of continuity and discontinuity of the self, was differently calibrated, too. To put it another way, what I'm suggesting is that we remembered and forgot in a manner that accorded with a relatively stable experience of the evolving self. As Derrida once observed, "They tell, and here is the enigma, that those consulting the oracle of Trophonios in Boeotia found there two springs and were supposed to drink from each, from the spring of memory and from the spring of forgetting."

And, even more specifically to Lewis’s point, I’d say that his proposal makes explicit the ordinary and humane rhythms of change and continuity, remembering and forgetting implicit in the co-evolution of self and body over time. “When I was a child,” the Apostle wrote, “I spoke like a child, I thought like a child, I reasoned like a child.” And, we may add, I looked like a child. Thus the appropriateness of my childishness was evident in my appearance. Yes, that was me as I was, but that is no longer me as I now am, and this critical difference was implicit in the evolution of my physical appearance, which signaled as much to all who saw me. No such signals are available to the self as it exists online.

Indeed, we might say that the self that exists online is in one important respect a very poor representation of the self precisely because of its tendency toward completeness of memory. Digital media, particularly social media platforms, condense the rich narrative of the self's evolution over time into a chaotic and perpetual moment. We might think of it as the self stripped of its story. In any case, suffice it to note that we find ourselves once more needing to compensate, with little success it would appear, for the absence of the body and the meaning it carries.

Lastly, thinking back to the obviously self-serving push in the last decade by social media companies like Facebook for users to maintain one online identity as a matter of integrity and authenticity, we may now see that demand as paradoxical at best. The algorithmically constituted identity that the platforms build upon these archives of the self and impose upon us is a self we never have been and never will be. More likely, it is a self we will find ourselves often chasing and sometimes fleeing.


You can subscribe to my newsletter, The Convivial Society, here.

Devil’s Bargain

In his most recent newsletter, sociologist Mark Carrigan mused about the question “What does it mean to take Twitter seriously?”

My initial thought, upon reading the titular question, was that we take Twitter seriously when we reckon with its corrosive effects, both on public discourse and on our psyche (to say nothing of our souls).

That was not quite what Carrigan had in mind: “… what I’m really seeking is to take it seriously,” Carrigan explains, “using it as a form of intellectual production while avoiding the mindless distraction it can so easily give rise to.”

I’m tired and my cynicism is acting up just now, so my response was dismissive: “Good luck.” I grant, though, that the question deserves a bit more in response.

Carrigan uses Twitter's ephemeral character as his foil and so notes that the fact that "people can use Twitter in ways which are far from ephemeral tells us little about how we can do this," that is, take it seriously as a site of intellectual production.

He goes on to note that there’s a certain artfulness involved in condensing a serious thought into a brief statement, as in the tradition of the aphorism. And he correctly observes that it is not just a matter of Tweeting slowly, as the Slow Scholarship Manifesto would have it, but rather it is best understood as a matter of care.

Carrigan knows, of course, that there are real challenges involved. He is currently taking a break from Twitter, after a long stint managing the social media feed for an academic publication and scheduling 50+ tweets a day (the mere thought exhausts me). He understands that much of Twitter’s content is less than artful and serious. He understands that it can inculcate unfortunate habits. But on the whole he remains hopeful about the possibility of using Twitter meaningfully.

I don’t know. I just took some time off the platform myself, roughly three months or so. In truth, I continued to check in periodically to see if there were any interesting stories or essays circulating and simply refrained from tweeting anything out except links to a couple of pieces that I wrote during that time.

I’ve come back to the platform, if I am honest about it, mostly for the sake of getting my work a little more attention. I have to confess that Twitter has yielded some good relationships and opportunities over the past few years. And there’s a part of me that wants to keep that portal open. It’s just that on most days, I’m not sure it’s worth it.

With regard to Twitter, I'm a convinced McLuhanite: the medium is the message, which is to say that regardless of the well-intentioned uses to which we put it, the medium will, over time, have its effect on users, and most of those effects are toxic. And, I hasten to add, I think this would be the case even if Jack kicked off the Nazis, etc.

Alan Jacobs, whose work you all know I admire and whose opinion I value, remains resolute in his decision to abandon the platform for good:

But here’s why I keep saying it: The decision to be on Twitter (or Facebook, etc.) is not simply a personal choice. It has run-on effects for you but also for others. When you use the big social media platforms you contribute to their power and influence, and you deplete the energy and value of the open web. You make things worse for everyone. I truly believe that. Which is why I’m so obnoxiously repetitive on this point.

Just give it a try: suspend your Big Social Media accounts and devote some time to the open web, to a blog of your own — maybe to micro.blog as an easy, simple way in. Give it a try and see if you’re not happier. I know I am.

I don't disagree, except to say that ditching the platform and going indie, as it were, works better (what counts as better, I grant, depends on your purposes) when you've already got a large audience that is going to follow you wherever you go or an established community (a convivial society, I'd dare say), online and off, with which to sustain your intellectual life. I'm pretty sure I don't quite have the former, and I've struggled to find the latter, making my way as an independent scholar of sorts these last several years.

But again, this is not to say that Alan is wrong, only that my counting the cost is a more conflicted affair.

In any case, I can feel Twitter working on me as I’ve begun to use it more frequently of late and allowed myself to tweet as well as read. I can feel it working on me in much the same way that, in Tolkien’s world, the wearers of the Ring can feel it working on them. It leaves one feeling weary, thin, exposed, morally compromised, divided, etc., while deeply distorting one’s view of reality. And, as far as I’m concerned, there are no Tom Bombadils, immune to the ring’s power, among us in this case.

So, I don’t know, the present foray into Twitterland may be short-lived.

What I do know is that the newsletter is increasingly where I want to write and what I want to keep developing. It may be that even this fair site, which has served me well for nearly a decade, is entering its twilight. So, anyway, sign up, and, as your final act on Twitter, tell others to do so, too.

The Wonder Of What We Are

I recently caught a link to a brief video showing a robotic hand manipulating a cube. Here is a longer video from which the short clip was taken, and here is the article describing the technology that imbued this robotic hand with its “remarkable new dexterity.” MIT’s Technology Review tweeted a link with this short comment: “This robot spent the equivalent of a hundred years learning how to manipulate a cube in its hand.”

Watching the robotic hand turn the cube this way and that, I was reminded of those first few months of a child’s life when they, too, learn how to use their hands. I remembered how absurdly proud I felt as a new father watching my baby achieve her fine motor skill milestones. I’m not sure who was more delighted when, after several failed attempts, she finally picked up her first puff and successfully brought it to her mouth.

This, in turn, elicited a string of loosely related reflections.

I imagined the unlikely possibility that one unintended consequence of these emerging technologies might be renewed wonder at the marvel that is the human being.

After all, the most sophisticated tools we are currently capable of fashioning are only haltingly developing the basic motor skills that come naturally to a six-month-old child. And, of course, we have not even touched on the acquisition of language, the capacity for abstract thought, the mystery of consciousness, etc. We’re just talking about turning a small cube about.

It seemed, then, that somewhere along the way our wonder at what we can make had displaced our wonder at what we are.

Ultimately, I don’t think I want to oppose these two realities. Part of the wonder of what we are is, indeed, that we are the sort of creatures who create technological marvels.

Perhaps there’s some sort of Aristotelian mean at which we ought to aim. It seems, at least, that if we marvel only at what we can make and not also at what we are, we set off on a path that leads ultimately toward misanthropic post-humanist fantasies.

Or, as Arendt warned, we would become “the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.”

It is odd that there is an impulse of sorts to create some of these marvels in our own image, as it were, that we seek to replicate not only our own capacities but even our physiology.

Yet, it is precisely this that also makes us anxious, fearful that we will be displaced or uncertain about our status in the great chain of being, to borrow an old formulation.

But our anxieties tend to be misplaced. More often than not, the real danger is not that our machines will eclipse us but that we will conform ourselves to the pattern of our machines.

In this way we are entranced by the work of our hands. It is an odd spin on the myth of Narcissus. We are captivated not by our physical appearance but by our ingenuity, by how we are reflected in our tools.

But this reflection is unfaithful, or, better, it is incomplete. It veils the fullness of the human person. It reduces our complexity. And perhaps in this way it reinforces the tendency to marvel only at what we can make by obscuring the full reality of what we are.

This full reality ultimately escapes our own (self-)understanding, which may explain why it is so tempting to traffic in truncated visions of the self. This creative self that has come to know so much of the world, principally through the tools it has fashioned, remains a mystery to itself.

We could do worse, then, than to wonder again at what we are:  the strangest phenomenon in the cosmos, as Walker Percy was fond of saying.


You can subscribe to my weekly newsletter, The Convivial Society, here.