Zygmunt Bauman (2008): “De Singly rightly suggests that in theorizing about present identities, the metaphors of ‘roots’ and ‘uprooting’ (or, let me add, the related trope of ‘disembedding’), all implying the one-off nature of the individual’s emancipation from the community of birth as well as the finality and irrevocability of the act, are better abandoned and replaced by the tropes of dropping and weighing an anchor.
“Unlike ‘uprooting’ and ‘disembedding,’ there is nothing irrevocable, let alone ultimate, in weighing anchor. While roots torn out of the soil in which they were growing are likely to desiccate and die, anchors are drawn up only to be dropped again elsewhere …. All in all, the anchor metaphor captures what the metaphor of ‘uprooting’ misses or keeps silent about: the intertwining of continuity and discontinuity in the history of all or at least a growing number of contemporary identities ….
“Paradoxically, emancipation of the self and its effective self-assertion need strong and demanding communities.”
Simone Weil (1949): “To be rooted is perhaps the most important and least recognized need of the human soul. It is one of the hardest to define. A human being has roots by virtue of his real, active, and natural participation in the life of a community, which preserves in living shape certain particular treasures of the past and certain expectations for the future. This participation is a natural one, in the sense that it is automatically brought about by place, conditions of birth, profession, and social surroundings. Every human being needs to have multiple roots. It is necessary for him to draw well-nigh the whole of his moral, intellectual, and spiritual life by way of the environment of which he forms a natural part.”
I want to pass on to you three pieces on what has come to be known as Big Data, a diverse set of practices enabled by the power of modern computing to accumulate and process massive amounts of data. The first piece, “View from Nowhere,” is by Nathan Jurgenson. Jurgenson argues that the aspirations attached to Big Data, particularly in the realm of human affairs, amount to a revival of Positivism:
“The rationalist fantasy that enough data can be collected with the ‘right’ methodology to provide an objective and disinterested picture of reality is an old and familiar one: positivism. This is the understanding that the social world can be known and explained from a value-neutral, transcendent view from nowhere in particular.”
The second piece is an op-ed in the NY Times by Frank Pasquale, “The Dark Market for Personal Data.” Pasquale considers the risks to privacy associated with the gathering and selling of personal information by companies equipped to mine and package such data. Pasquale concludes:
“We need regulation to help consumers recognize the perils of the new information landscape without being overwhelmed with data. The right to be notified about the use of one’s data and the right to challenge and correct errors is fundamental. Without these protections, we’ll continue to be judged by a big-data Star Chamber of unaccountable decision makers using questionable sources.”
Finally, here is a journal article, “Obscurity and Privacy,” by Evan Selinger and Woodrow Hartzog. Selinger and Hartzog offer obscurity as an explanatory concept to help clarify our thinking about the sorts of issues that usually get lumped together as matters of privacy. Privacy, however, may not be a sufficiently robust concept to meet the challenges posed by Big Data.
“Obscurity identifies some of the fundamental ways information can be obtained or kept out of reach, correctly interpreted or misunderstood. Appeals to obscurity can generate explanatory power, clarifying how advances in the sciences of data collection and analysis, innovation in domains related to information and communication technology, and changes to social norms can alter the privacy landscape and give rise to three core problems: 1) new breaches of etiquette, 2) new privacy interests, and 3) new privacy harms.”
In each of these areas, obscurity names the relative confidence individuals can have that the data trail they leave behind as a matter of course will not be readily accessible:
“When information is hard to understand, the only people who will grasp it are those with sufficient motivation to push past the layer of opacity protecting it. Sense-making processes of interpretation are required to understand what is communicated and, if applicable, whom the communications concerns. If the hermeneutic challenge is too steep, the person attempting to decipher the content can come to faulty conclusions, or grow frustrated and give up the detective work. In the latter case, effort becomes a deterrent, just like in instances where information is not readily available.”
Big Data practices have made it increasingly difficult to achieve this relative obscurity, thus posing a novel set of social and personal challenges. For example, the risks Pasquale identifies in his op-ed may be understood as risks that follow from a loss of obscurity. Read the whole piece for a better understanding of these challenges. In fact, be sure to read all three pieces. Jurgenson, Selinger, and Pasquale are among our most thoughtful guides in these matters.
Allow me to wrap this post up with a couple of additional observations. Returning to Jurgenson’s thesis about Big Data–that Big Data is a neo-Positivist ideology–I’m reminded that positivist sociology, or social physics, was premised on the assumption that the social realm operated in predictable, law-like fashion, much as the natural world operated according to the Newtonian world picture. In other words, human action was, at root, rational and thus predictable. The early twentieth century profoundly challenged this confidence in human rationality. Think, for instance, of the carnage of the Great War or the advent of Freudianism. Suddenly, humanity seemed less rational and, consequently, the prospect of uncovering law-like principles of human society must have seemed far less plausible. Interestingly, this irrationality preserved our humanity, insofar as our humanity was understood to consist of an irreducible spontaneity, freedom, and unpredictability. That held, at least, so long as the Other against which our humanity was defined was the Machine.
If Big Data is neo-Positivist, and I think Jurgenson is certainly on to something with that characterization, it aims to transcend the earlier failure of Comtean Positivism. It acknowledges the irrationality of human behavior, but construes it, paradoxically, as Predictable Irrationality. In other words, it suggests that we can know what we cannot understand. And this recalls Evgeny Morozov’s critical remarks in “Every Little Byte Counts”:
“The predictive models Tucker celebrates are good at telling us what could happen, but they cannot tell us why. As Tucker himself acknowledges, we can learn that some people are more prone to having flat tires and, by analyzing heaps of data, we can even identify who they are — which might be enough to prevent an accident — but the exact reasons defy us.
Such aversion to understanding causality has a political cost. To apply such logic to more consequential problems — health, education, crime — could bias us into thinking that our problems stem from our own poor choices. This is not very surprising, given that the self-tracking gadget in our hands can only nudge us to change our behavior, not reform society at large. But surely many of the problems that plague our health and educational systems stem from the failures of institutions, not just individuals.”
It also suggests that some of the anxieties associated with Big Data may not be unlike those occasioned by the earlier positivism–they are anxieties about our humanity. If we buy into the story Big Data tells about itself, then it threatens, finally, to make our actions scrutable and predictable, suggesting that we are not as free, independent, spontaneous, or unique as we might imagine ourselves to be.
It’s not uncommon to hear someone say that they were haunted by an image, often an old photograph. It is a figurative and evocative expression. To say that an image is haunting is to say that the image has lodged itself in the mind like a ghost might stubbornly take up residence in a house, or that it has somehow gotten a hold of the imagination and in the imagination lives on as a spectral after-image. When we speak of images of the deceased, of course, the language of haunting approaches its literal meaning. In these photographs, the dead enjoy an afterlife in the imagination.
I’ve lately been haunted myself by one such photograph. It is a well-known image of Lewis Powell, the man hanged for his failed attempt to assassinate Secretary of State William Seward. On the same night that John Wilkes Booth murdered the president, Powell was to kill the secretary of state, and their co-conspirator, George Atzerodt, was to kill Vice President Andrew Johnson. Atzerodt never made the attempt at all. Powell followed through, and, although Seward survived, he inflicted tremendous suffering on the Seward household.
I came upon the haunting image of Powell in a series of recently colorized Civil War photographs, and I was immediately captivated by the apparent modernity of the image. Nineteenth-century photographs tend to have a distinct feel, one that clearly announces the distant “pastness” of what they have captured. That they are ordinarily black-and-white only partially explains this effect. More significantly, the effect is communicated by the look of the people in the photographs. It’s not the look of their physical appearance, though; rather, it’s the “look” of their personality.
There is a distinct subjectivity—or, perhaps, lack thereof—that emerges from these old photographs. There is something in the eyes that suggests a way of being in the world that is foreign and impenetrable. The camera is itself a double cause of this dissonance. First, the subjects seem unsure of how to position themselves before the camera; they are still unsettled, it seems, by the photographic technique. They seem to be wrestling with the camera’s gaze. They are too aware of it. It has rendered them objects, and they’ve not yet managed to negotiate the terms under which they may recover their status as subjects in their own right. In short, they had not yet grown comfortable playing themselves before the camera, with the self-alienated stance that such performance entails.
But then there is this image of Powell, which looks as if it could have been taken yesterday and posted on Instagram. The gap in consciousness seems entirely closed. The “pastness” is eclipsed. Was this merely a result of his clean-shaven, youthful air? Was it the temporal ambiguity of his clothing or of the way he wore his hair? Or was Powell on to something that his contemporaries had not yet grasped? Did he hold some clue about the evolution of modern consciousness? I went in search of an answer, and I found that the first person I turned to had been there already.
Death on Film
Roland Barthes’ discussion of death and photography in Camera Lucida: Reflections on Photography has achieved canonical status, and I turned to his analysis in order to shed light on my experience of this particular image that was so weighted with death. I soon discovered that an image of Powell appears in Camera Lucida. It is not the same image that grabbed my attention, but a similar photograph taken at the same time. In this photograph, Powell is looking at the camera, the manacles that bind his hands are visible, but still the modernity of expression persists.
Barthes was taken by the way that a photograph suggests both the “that-has-been” and the “this-will-die” aspects of a photographic subject. His most famous discussion of this dual gesture involved a photograph of his mother, which does not appear in the book. But a shot of Powell is used to illustrate a very similar point. It is captioned, “He is dead, and he is going to die …” The photograph simultaneously witnesses to three related realities. Powell was; he is no more; and, in the moment captured by this photograph, he is on his way to death.
Barthes also borrowed two Latin words for his analysis: studium and punctum. The studium of a photograph is its ostensible subject matter and what we might imagine the photographer seeks to convey through the photograph. The punctum, by contrast, is the aspect that “pricks” or “wounds” the viewer. The experience of the punctum is wholly subjective. It is the aspect that disturbs the studium and jars the viewer. Regarding the Powell photograph, Barthes writes,
“The photograph is handsome, as is the boy: that is the studium. But the punctum is: he is going to die. I read at the same time: this will be and this has been; I observe with horror an anterior future of which death is the stake. By giving me the absolute past of the pose, the photograph tells me death in the future. What pricks me is the discovery of this equivalence.”
In my own experience, the studium was already the awareness of Powell’s impending death. The punctum was the modernity of Powell’s subjectivity. Still eager to account for the photograph’s effect, I turned from Barthes to historical sources that might shed light on the photographs.
The Gardner Photographs
The night of the assassination attempt, Powell entered the Seward residence claiming that he was asked to deliver medicine for Seward. When Seward’s son, Frederick, told Powell that he would take the medicine to his father, Powell handed it over, started to walk away, but then wheeled on Frederick and put a gun to his head. The gun misfired and Powell proceeded to beat Frederick over the head with it. He did so with sufficient force to crack Frederick’s skull and jam the gun.
Powell then pushed Seward’s daughter out of the way as he burst into the secretary of state’s room. He leapt onto Seward’s bed and repeatedly slashed at Seward with a knife. Seward was likely saved by an apparatus he was wearing to correct an injury to his jaw sustained days earlier. The apparatus deflected Powell’s blows from Seward’s jugular. Powell then wounded two other men, including another of Seward’s sons, as they attempted to pull him off of Seward. As he fled down the stairs, Powell also stabbed a messenger who had just arrived. Like everyone else who was wounded that evening, the messenger survived, but he was paralyzed for life.
Powell then rushed outside to discover that a panicky co-conspirator who was to help him make his getaway had abandoned him. Over the course of three days, Powell made his way to a boardinghouse owned by Mary Surratt, where Booth and his circle had plotted the assassinations. He arrived, however, just as Surratt was being questioned, and, failing to provide a very convincing account of himself, he was taken into custody. Shortly thereafter, Powell was picked out of a lineup by one of Seward’s servants and taken aboard the ironclad USS Saugus to await his trial.
It was aboard the Saugus that Powell was photographed by Alexander Gardner, a Scot who had made his way to America to work with Mathew Brady. According to Powell’s biographer, Betty Ownsbey, Powell resisted having his picture taken by vigorously shaking his head when Gardner prepared to take a photograph. Given the exposure time, this would have blurred his face beyond recognition. Annoyed by Powell’s antics, H. H. Wells, the officer in charge of the photo shoot, struck Powell’s arm with the side of his sword. At this, Major Eckert, an assistant to the secretary of war who was there to interrogate Powell, interposed and reprimanded Wells.
Powell then seems to have resigned himself to being photographed, and Gardner proceeded to take several shots of Powell. Gardner must have realized that he had something unique in these exposures because he went on to copyright six images of Powell. He didn’t bother to do so with any of the other pictures he took of the conspirators. Historian James Swanson explains:
“[Gardner’s] images of the other conspirators are routine portraits bound by the conventions of nineteenth century photography. In his images of Powell, however, Gardner achieved something more. In one startling and powerful view, Powell leans back against a gun turret, relaxes his body, and gazes languidly at the viewer. There is a directness and modernity in Gardner’s Powell suite unseen in the other photographs.”
My intuition was re-affirmed, but the question remained: What accounted for the modernity of these photographs?
Resisting the Camera’s Gaze
Ownsbey’s account of the photo shoot contained an important clue: Powell’s subversive tactics. Powell clearly intuited something about his position before the camera that he didn’t like. He attempted one form of overt resistance, but appears to have decided that this choice was untenable. He then seems to acquiesce. But what if he wasn’t acquiescing? What if the modernity that radiates from these pictures arises out of Powell’s continued resistance by other means?
Powell could not avoid the gaze of the camera, but he could practice a studied indifference to it. In order to resist the gaze, he would carry on as if there were no gaze. To ward off the objectifying power of the camera, he had to play himself before the camera. Simply being himself was out of the question; the observer effect created by the camera’s presence so heightened one’s self-consciousness that it was no longer possible to simply be. Simply being assumed self-forgetfulness. The camera does not allow us to forget ourselves. In fact, as with all technologies of self-documentation, it heightens self-consciousness. In order to appear indifferent to the camera, Powell had to perform the part of Lewis Powell as Lewis Powell would appear were there no camera present.
In doing so, Powell stumbled upon the negotiated settlement with the gaze of the camera that eluded his contemporaries. He was a pioneer of subjectivity. Before the camera, many of his contemporaries either stared blankly, giving the impression of total vacuity, or else they played a role–the role of the brave soldier, or the statesman, or the lover, etc. Powell found another way. He played himself. There was nothing new about playing a role, of course. But playing yourself, that seems a watershed of consciousness. Playing a role entails a deliberate putting on of certain affectations; playing yourself suggests that there is nothing to the self but affectations. The anchor of identity in self-forgetfulness is lifted and the self is set adrift. Perhaps the violence that Powell had witnessed and perpetrated prepared him for this work against his psyche.
If indeed this was Powell’s mode of resistance, it was Pyrrhic: ultimately it entailed an even more profound surrender of subjectivity. It internalized the objectification of the self which the external presence of the camera elicited. This is what gave Powell’s photographs their eerie modernity. They were haunted by the future, not the past. It wasn’t Powell’s imminent death that made them uncanny; it was the glimpse of our own fractured subjectivity. Powell’s struggle before the camera, then, becomes a parable of human subjectivity in the age of pervasive documentation. We have learned to play ourselves with ease, and not only before the camera. The camera is now irrelevant.
In the short time that was left to him after the Gardner photographs were taken, Powell went on to become a minor celebrity. He was, according to Swanson, the star attraction at the trial of Booth’s co-conspirators. Powell “fascinated the press, the public, and his own guards.” He was, in the words of a contemporary account, “the observed of all observers, as he sat motionless and imperturbed, defiantly returning each gaze at his face and person.” But the performance had its limits. Although Ownsbey has raised reasonable doubts about the claim, it was widely reported that Powell had attempted suicide by repeatedly pounding his head against a wall.
On July 7, 1865, a little over two months after the Gardner photographs were taken, Powell was hanged with three of his co-conspirators. It doesn’t require Barthes’ critical powers to realize that death saturates the Powell photographs, but death figured only incidentally in the reading I’ve offered here. It is not, however, irrelevant that this foray into modern consciousness was undertaken under the shadow of death. It is death, perhaps, that gave Powell’s performance its urgency. And perhaps it is now death that serves as the last, lone anchor of the self.
Since writing Friday’s post and benefiting from the subsequent exchange with Nathan Jurgenson, I’ve come across a handful of items that have kept me thinking about identity and authenticity.
I’m still thinking, but I thought I’d gather some of these items here and, in Linda Richman fashion, invite you to “discuss among yourselves.”
In a blog post at The American Scholar, William Deresiewicz makes a number of interesting observations:
“Genuine, vintage, authentic: these are the words that signify spiritual value now for us, and constitute the tokens of our status competition. We hunger for the real to fill us up, and by the real we mean the old or the traditional: anything that isn’t us. The highest praise we can give that lamp or sideboard is that it looks like the kind of thing that’s been in someone’s family for generations, and that’s exactly the illusion that we pay for those objects to give us: the illusion of lineage, continuity, rootedness, memory. Modernity is constant movement, within lives and between generations, a constant shedding and forgetting. We value things that give us the sense of being embedded in space and time, even if we have to buy someone else’s memories, or visit other people’s histories, to get it.”
“Buddy, if it’s a choice, it’s not an identity. Identity is not a suit of clothes you take on and off. It’s a skin; it sticks to you whether you like it or not. It’s what other people call you—people with the same identity, people with different ones—not what you decide to consider yourself. History gives it to you, not some kind of “search.” But identity now has become a matter not of belonging or community, both of which are gone, but of, precisely, authenticity.”
“So what’s the answer? Just assent to your life. You’re middle class? You’re white? You’re Western? So what? That’s just as real as anything else. ‘We seek other conditions,’ Montaigne said, ‘because we do not understand the use of our own.'”
Regarding community and tradition, consider the following observation by W. H. Auden:
“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues which were once nursed unconsciously by the forces of nature must now be recovered and fostered by a deliberate effort of the will and the intelligence. In the future, societies will not grow of themselves. They will be either made consciously or decay.”
Is the question of authenticity correlated to the “deliberate effort” and conscious making Auden calls for?
Thanks to Alan Jacobs for both of those. And thanks to Rob Horning for the following from Simon Reynolds:
“You don’t have to be an antiquated Romantic or old-fashioned early 20th-century-style Modernist to find this input/output version of creativity unappealing. Surely the artist or writer is more than just a switch for the relay of information flows, the cross-referencing of sources and coordinates? What is missed out in the recreativity model is the body: the artist as a physical being, someone whose life and personal history has left them marked with a singular set of desires and aversions. There is also the little matter of will: bubbling up from within, that profoundly inegalitarian drive to stand out, to assert oneself in the face of anonymity and death. It’s this aspect of embodiment and ego that gets downgraded in digital culture, which tends to reduce us to the textual: a receiver/transmitter of data, a node in the network.”
Is the question of authenticity the revenge of the supposedly de-centered self?
Speaking of Romantics, or their near kin, here is Emerson:
“A boy is in the parlour what the pit is in the playhouse; independent, irresponsible, looking out from his corner on such people and facts as pass by, he tries and sentences them on their merits, in the swift, summary way of boys, as good, bad, interesting, silly, eloquent, troublesome. He cumbers himself never about consequences, about interests: he gives an independent, genuine verdict. You must court him: he does not court you. But the man is, as it were, clapped into jail by his consciousness. As soon as he has once acted or spoken with eclat, he is a committed person, watched by the sympathy or the hatred of hundreds, whose affections must now enter into his account. There is no Lethe for this.”
Is authenticity just another word for the lost innocence of childhood?
Chris Poole is among the 21 New Media Innovators recently profiled by New York Magazine. The magazine bestowed upon him the title of “Meme Generator” and provided this short bio:
Chris Poole (handle: “moot”) founded the anonymous message board 4Chan when he was just 15. It’s grown into the breeding ground for some of the web’s most pervasive memes, as well as some of its more ominous movements. In the last two years, Poole has raised more than $3 million in venture funding for a new image-centric site called Canvas, which is similar to but separate from 4chan, and he’s become an advocate for web privacy. At this year’s South by Southwest conference, for instance, he had this to say: “Zuckerberg’s totally wrong on anonymity being total cowardice. Anonymity is authenticity.”
It was that last line that caught my attention. I’ve lately been wrestling with the relative virtues and vices of anonymity and personalization. A while ago I argued that Zuckerberg was indeed wrong about identity, and self-servingly so. I wasn’t interested in defending anonymity in that case, but rather the more natural, gradual, and contingent self-disclosure that characterizes ordinary human relationships.
In the past several days I’ve returned to the theme, arguing first that Google+, through its Circles, attempts to address Facebook’s “all friends are equal” model. Then I suggested that the trend toward personalization and away from the anonymity of the early web better (but imperfectly) fitted our social impulse and operated to some degree as what sociologists call a mediating structure. I followed that up with a post commenting on Morozov’s lament for the loss of the early Internet’s communal character, which, I suggested, sat uncomfortably with Morozov’s privileging of privacy over personalization. Human communities don’t ordinarily function on those terms. Oddly, I argued in the comment thread of that same post for the desirability of anonymity in some instances.
So now I come across Poole’s claim — “anonymity is authenticity” — and feel primed to comment. My initial response is this: If anonymity is authenticity, then it is a Pyrrhic authenticity.
There is a certain plausibility to Poole’s claim: it suggests that we are most ourselves when we know that we will not be made to answer for what we are doing or saying; that the public self is a restrained and inhibited, and thus not authentic, version of the true self. It tracks with the point of the Ring of Gyges story in Plato’s Republic. The ring made its owner invisible and so, the argument went, revealed the owner’s true character (or, better, the superficiality of virtue).
But if it is a plausible account of the human condition, it is also an incomplete one. It is a Pyrrhic authenticity because it eliminates the possibility of appearing authentically before others who acknowledge our presence. It thus suggests that we are most ourselves at the point at which it does not really matter what we are. Now one may adopt a Simon & Garfunkel, “I am a rock, I am an island” attitude at this juncture and insist that one does not need the acknowledgement and recognition, to say nothing of the love and care, that comes from human relationships; if so, then this post is probably not the place to argue otherwise. I’m going to count on the fact that most of us will rather resonate with Hannah Arendt when she writes, “To live an entirely private life means above all to be deprived of things essential to a truly human life: to be deprived of the reality that comes from being seen and heard by others.”
What good is it to finally be myself, if I am myself alone? Granted, there may be some extraordinary circumstances when remaining authentically oneself requires a great solitude. Ordinarily, however, we answer to both a need to cultivate the inner self with a measure of independence and a need to find fulfillment in meaningful relationships with and among others.
Even I feel that over the course of these posts I have been trying to hit a moving target. This reflects the complexity of lived human experience. It is a complexity that is hard to account for online when platforms and interfaces seek to reduce that complexity to either the indiscriminately social or the misanthropically private. We are complex creatures and the great danger is that we will end up reducing our complexity to fit the constraints of life mediated through one platform or another.