The Slippery Slope Is Greased By Risk Aversion

In a short post, Walter Russell Mead links to five stories under the heading of “Big Brother Goes Hi-Tech.” Click over for links to stories about face scanners in Japan, phone-monitoring in Iran, “infrared antiriot cameras” in China, and computer chips embedded in the uniforms of school children in Brazil.

The stories from Japan, China, and Iran may be the most significant (and disconcerting) in terms of the reach of the technologies in question, but it was the item from Brazil that caught my attention, perhaps because it involved school children, and we are generally more troubled by problematic developments that directly impact children. The story to which Mead linked appeared in the NY Times and amounts to little more than a brief note. Here is the whole of it:

“Grade-school students in a northeastern Brazilian city are using uniforms embedded with computer chips that alert parents if they are cutting classes, the city’s education secretary, Coriolano Moraes, said Thursday. Twenty-thousand students in 25 of Vitória da Conquista’s 213 public schools started using T-shirts with chips this week, Mr. Moraes said. By 2013, all of the city’s 43,000 public school students will be using them, he added. The chips send a text message to the cellphones of parents when their children enter the school or alert the parents if their children have not arrived 20 minutes after classes have begun. The city government invested $670,000 in the project, Mr. Moraes said.”
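As an aside for the technically inclined, the attendance logic the Times describes is simple enough to sketch. Here is a minimal, purely hypothetical version in Python; the note says nothing about the actual implementation, so every name, number, and function below is invented for illustration, and the SMS gateway is a stand-in:

```python
from datetime import datetime, timedelta

# Per the Times note: parents are alerted if a child has not
# arrived 20 minutes after classes begin. (The start time is made up.)
CLASSES_BEGIN = datetime(2012, 3, 22, 7, 30)
GRACE_PERIOD = timedelta(minutes=20)

def send_text(parent_phone, message):
    # Stand-in for whatever SMS gateway the city actually uses.
    print(f"SMS to {parent_phone}: {message}")

def on_chip_scanned(student):
    # Fired when the reader at the school gate detects a uniform's chip.
    student["arrived"] = True
    send_text(student["parent_phone"], f"{student['name']} has entered the school.")

def check_absences(students, now):
    # Run once the grace period has elapsed; texts the parents of no-shows.
    if now < CLASSES_BEGIN + GRACE_PERIOD:
        return
    for student in students:
        if not student.get("arrived"):
            send_text(student["parent_phone"], f"{student['name']} has not arrived at school.")

# One student checks in, one does not.
students = [
    {"name": "Ana", "parent_phone": "+55-77-0000-0001"},
    {"name": "Bruno", "parent_phone": "+55-77-0000-0002"},
]
on_chip_scanned(students[0])
check_absences(students, CLASSES_BEGIN + GRACE_PERIOD)
```

The only point of the sketch is that the system, as described, amounts to a glorified attendance sheet; the real question is whether the chips can be read anywhere other than the school gate.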

So what to make of this? It differs from the technologies being deployed in China, Japan, and Iran in that it is being implemented in the light of day.

[Curious side note: I misspelled “implemented” in the sentence above and it was auto-corrected to read “implanted”. Perhaps the computers know something we don’t!]

On the face of it, there is nothing secretive about this program, and I would be surprised if there were not some kind of opt-out provision for parents. Also, from this short notice it is unclear whether the augmented T-shirts can be tracked, or whether they simply interact with a sensor on school premises and are inactive outside of school grounds. If the technology could be used to track a child’s location outside of school, it would be more problematic.

Or perhaps it might be more attractive. The same impulse that would sanction these anti-truancy T-shirts, taken further along the path of its own logic, would also seem to sanction technology that tracks a child’s location at all times. It is all about safety and security; at least that is how it would be presented and justified. It would be the ultimate safeguard against kidnapping. It would also solve, or greatly mitigate, the problem of children wandering off on their own and finding themselves lost. Of course, clothing can be removed from one’s person, which to my mind opens up all kinds of flaws in the Brazilian program. How long would it take clever teenagers to figure out all sorts of ways to circumvent this technology?

Recalling the auto-correct hint, then, it would seem that the answer to this technology’s obvious design flaw would be to embed the chips subcutaneously. We already do it with our pets. Wouldn’t it be far more tragic to lose a child than to lose a pet?

Now, seriously, how outlandish does this sound at the techno-social juncture we find ourselves in? Think about it. Is it just me, or does it not seem as if we are past the point where we would be shocked by the possibility of implanted chips? I’m sure there is a wide spectrum of opinion on such matters, but the enthusiasts are not exactly on the fringes.

Consider the dynamic that Thomas de Zengotita has labeled “Justin’s Helmet Principle.” Sure, Justin looks ridiculous riding down the street with his training wheels on, more pads than a lineman, and a helmet that makes him look like Marvin the Martian, but do I want to bear the burden of not decking Justin out in this baroque assemblage of safety equipment, only to have him fall and seriously injure himself? No, probably not. So on goes the safety crap.

Did we sense that there was something a little off when we started sending our first graders off to school with cell phones, just a fleeting moment of incongruity perhaps? Maybe. Did we dare risk not giving them the cell phone and having them get lost, or worse, without a way of getting help? Nope. So there goes Johnny with the cell phone.

And in the future we might add: did we think it disconcerting when we first started implanting chips in our children? Definitely. But did we want to risk having them be kidnapped or lost and not be able to find them? No, of course not.

The slippery slope is greased by the oil of risk aversion.

Thoughts?

Clever Blog Post Title

Reading about Jon Stewart’s recent D.C. rally, I noticed a picture of a participant wearing a “V for Vendetta” mask and carrying a sign. It wasn’t the mask, but the words on the sign that caught my eye. The sign read, “ANGRY SIGN.”

I wouldn’t have paid much attention to it had I not recently crossed paths with a guy wearing a T-shirt that read: “Words on a T-shirt.” Both of these then reminded me of the very entertaining “Academy Award Winning Trailer,” a self-referential spoof of movie trailers embedded at the end of this post. The “trailer” opens with a character saying, “A toast — establishing me as the wealthy, successful protagonist.” And on it goes in that vein. The comments section on the clip’s YouTube page likewise yielded a self-referential parody of typical YouTube comments. Example: “Quoting what I think is the funniest part of the video and adding ‘LOL’ at the end of the comment.”

There is a long tradition of self-reference in the art world (Magritte’s pipe is a classic example), but now it seems the self-referentiality that was a mark of the avant-garde is an established feature of popular culture. Call it vulgar self-referentiality if you like. Our sophistication, in fact, seems to be measured by the degree of our reflexivity and self-awareness (“I know, that I know, that I know, etc.”), which blends into a mood of ironic detachment. Earnestness in this environment becomes a vice. We are in a sense “mediating” or re-presenting our own selves through this layer of reflexivity, and we’re pretty sure everyone else is too.

On the surface, perhaps, there is a certain affinity with the Socratic injunction, “Know thyself.” But this is meta-Socrates: “Knowingly know thyself.” At issue for some is whether there is a subject there to know that would localize and contain the knowing, or whether, in the absence of a subject, all there is to know is a process of knowing, knowing itself.

Leaning on Thomas de Zengotita’s Mediated: How the Media Shapes Your World and the Way You Live in It and Hannah Arendt’s The Human Condition, here’s a grossly oversimplified thumbnail genealogy of our heightened self-consciousness. On the heels of the Copernican revolution, we lost confidence in the ability of our senses to apprehend the world truthfully (my eyes tell me the world is stationary and the sun moves; turns out my eyes lied), but following Descartes we sought certainty in our inner subjective processes: “I think, therefore I am” and all that. Philosophically, then, our attention turned to our own thinking: from the object out there to the subject in here.

Sociologically, modernity has been marked by an erosion of cultural givens; very little attains a taken-for-granted status relative to the pre-modern and early modern world. In contrast to the medieval peasant, consider how much of your life is not pre-determined by cultural necessity. Modernity, then, is marked by the expansion of choice, and choice ultimately foregrounds the act of choosing, which yields an attendant lack of certainty in the choice: I am always aware that I could have chosen otherwise. In other words, identity grounded in the objective (reified) structures of traditional society is superseded by an identity that is the aggregate of the choices we make; choice stands in for fate, nature, providence, and all the rest. Eventually an awareness of this process throws even the notion of the self into question: I could have chosen otherwise, thus I could be otherwise. The self, as the story goes, is decentered. And whether or not that is really the case, it certainly seems to be what we feel to be the case.

So the self-referentialism that marked the avant-garde and the theories of social constructivism that were controversial a generation ago are by now old news. They are widely accepted by most people under 35, who picked it all up not by visiting the art houses or reading Derrida and company, but by living with and through the material conditions of their society.

First the camera, then the sound recorder, then the video recorder, and finally the digitization and consequent democratization of the means of production and distribution (cheap cameras, YouTube, etc.) created a society in which we all know that we know that we are enacting multiple roles, and that no action yields a window onto the self, if by self we mean some essential, unchanging core of identity. Foucault’s surveillance society has produced a generation of method actors. In de Zengotita’s words,

Whatever the particulars, to the extent that you are mediated, your personality becomes an extensive and adaptable tool kit of postures . . . You become an elaborate apparatus of evolving shtick that you deploy improvisationally as circumstances warrant.  Because it is all so habitual, and because you are so busy, you can almost forget the underlying reflexivity.

Almost.

There seems to be a tragically hip quality to this kind of hyper-reflexivity, but it is also like looking into a mirror image of a mirror: we get mired in an infinite regress of our own consciousness. We risk self-referential paralysis, individually and culturally, and we experience a perpetual crisis of identity. My sense is that a good deal of our cynicism and apathy is also tied into these dynamics. Not sure where we go from here, but this seems to be where we are.

Or maybe this is all just me.