Flame Wars, Punctuation Cartoons, and Net-heads: The Internet in 1988

I remember having, in the late 1980s, an old PC with a monochrome monitor, amber on black, that was hooked up to the Internet through Prodigy. I was on the Internet, but I had no idea what the Internet was. All I knew was that I could now get near-realtime updates on the score of Mets games.

Back in November, the Washington Post’s tech page posted the text of an article from the newspaper covering the Internet in 1988, right about the time I was messing around on Prodigy:  “Here’s how The Post covered the ‘grand social experiment’ of the Internet in 1988.” It’s a fascinating portal into what feels like another world, sort of. I usually think of the modern Internet experience first taking shape with the World Wide Web in the early and mid-90s. Reading this article, I was struck by how early many of the contours of the web as we know it, and the language we use to describe it, began to appear. To be sure, there are also striking discontinuities, but the Internet in 1988 — before AOL, WWW, the dot-com bubble, Web 2.0, social, mobile, and all that — exhibited characteristics that are still readily recognizable.

Consider the early example of “crowd-sourcing” or collective intelligence with which the author opens the article:

Kreisel was looking for an efficient way to paint patterns inside computer-drawn shapes. Paul Heckbert, a graduate student in California, did not know Kreisel, but he had a pretty good piece of code, or computer programming, to do the job. He dispatched it to Kreisel’s machine, and seconds later the New Yorker’s problem was solved.

Of course, the snake was already in the garden. The article is, in fact, occasioned by “a rogue program, known as a ‘virus,'” designed by a student at Cornell that “did not destroy any data but clogged computers and wasted millions of dollars’ worth of skilled labor and computer time.”

The virus, we’re told, “frightens many network visionaries, who dream of a ‘worldnet’ with ever more extensive connections and ever fewer barriers to the exchange of knowledge.” (Cyber-utopianism? Check. Of course, the roots of cyber-utopianism go back further still than 1988.) According to a Harvard astrophysicist, the Internet is a “community far more than a network of computers and cables.” “When your neighbors become paranoid of one another,” he added, “they no longer cooperate, they no longer share things with each other. It takes only a very, very few vandals to … destroy the trust that glues our community together.”

The scale clearly appeared massive, but today it seems quaint: “Together the news groups produce about 4 million characters of new material a day, the equivalent of about five average books.” But don’t worry about trying to keep up with it all, “90 percent of it is complete and utter trash,” at least as far as that astrophysicist was concerned. Hard to imagine that he was far off the mark (or that the ratio has shifted too much in the ensuing years).

At the time, “thousands of men and women in 17 countries swap recipes and woodworking tips, debate politics, religion and antique cars, form friendships and even fall in love.” Honestly, that’s not a bad sample of the sorts of things we’re still doing on the Internet. In part, this is because it is not a bad sample of what human beings do generally, so it’s what we end up doing in whatever social spaces we end up creating with our tools.

And when human beings find themselves interacting in new contexts created by new communication technologies, there are bound to be what we might kindly call “issues” while norms and conventions adapt to account for the new techno-social configuration. So already in 1988, we read that the Internet “has evolved its own language, social norms and ‘netiquette.'” More than twenty years later, that project is still ongoing.

The author focused chiefly on the tone of discourse on Internet forums and, rightly I think, attributed its volatility to the characteristics of the medium:

Wouk’s riposte is a good example of why arguments sometimes intensify into bitter feuds known as flame wars, after the tendency of one character in Marvel Comics’ Fantastic Four to burst into flames (“Flame on!” he shouts) when he is angry. Is Wouk truly angry or just having a good time? Because the written word conveys no tone of voice, it isn’t always easy to tell.

“In a normal social setting,” said Chuq von Rospach of Sun Microsystems, “chances are the two of them would have a giggle over it and it would go away. On a network it tends to escalate. The feedback mechanisms that tell you to back off, or tell you that this person’s joking, aren’t there. The words are basically lifeless.”

Not only would I have failed to guess that “flame wars” was already in use in 1988, I had no idea of its etymology. Notably, the author doesn’t use the word emoticon when describing those symbols:  “True net-heads sometimes resort to punctuation cartoons to get around the absence of inflection. They may append a :-) if they are making a joke (turn your head to the left) or use :-( for an ersatz frown.”

“Net-heads” appears not to have caught on, but what the author calls “punctuation cartoons” certainly have. (What we now call emoticons, by the way, first appeared in the late 19th century.)

Finally, lest the Internet of 1988 appear all too familiar, here’s one glaring point of discontinuity on which to close: “The one unbending rule is that thou shalt not post commercial announcements. It isn’t written anywhere, but heaven help the user who tries to broadcast an advertisement.”

And for good measure, here’s another snippet of the not-too-distant past (1994) that nonetheless manages to feel as if it were ages ago. “Allison, can you explain what Internet is?”

Recent Readings

This past Friday I took the first of my comprehensive exams. I think it went well, well enough anyway. If my committee agrees, I’ll have two more to go, which I’ll be taking within the next three months or so.

The first exam was over selections from my program’s core list of readings. The list features a number of authors that I’ve mentioned before, including Walter Ong, Lev Manovich, Jerome McGann, N. Katherine Hayles, and Gregory Ulmer. There were also a number of “classic” theorists: Benjamin, Barthes, Foucault, Baudrillard.

Additionally, there were a few titles that were new to me or that I had never gotten around to reading. I thought it would be worthwhile to briefly note a few of these.

Daniel Headrick’s When Information Came of Age: Technologies of Knowledge in the Age of Reason and Revolution tells the story of a number of information systems — for classifying, storing, transforming, and transmitting information — that preceded the advent of what we ordinarily think of as the digital information revolution.

I finally read one of Donald Norman’s books, Living with Complexity. Hands down the easiest read on the list, and an engaging, enlightening take on the design of everyday objects and experiences.

From Papyrus to Hypertext: Toward the Universal Digital Library, a translated work by French scholar Christian Vandendorpe, is laid out as a series of short reflections on the history of texts and reading. It was originally published more than a decade ago, so it is a little dated. Nonetheless, I found it useful.

The most important title that I encountered was Bruno Latour’s Reassembling the Social: An Introduction to Actor-Network-Theory. It’s a shame that it has taken me so long to finally read something from Latour. I heartily recommend it. I’ll be giving it another read soon and may have something to say about it in the future, but for now I’ll simply note that I found it quite enlightening.

Historian Thomas Misa gives a fine account of the entanglement of technology and Western culture in Leonardo to the Internet: Technology and Culture from the Renaissance to the Present.

Then there was this 2010 blogpost by Ian Bogost, which was anthologized in Debates in the Digital Humanities: “The Turtlenecked Hairshirt: Fetid and Fragrant Futures for the Humanities.” It is, how shall I put it, bracing.

Finally, this wasn’t part of my readings for the exam, but I did stumble upon a 1971 review of Foucault’s The Order of Things: An Archaeology of the Human Sciences by George Steiner: “The Mandarin of the Hour-Michel Foucault.” The review is not quite so dismissive as the title might suggest.

There you have it, that’s what I have to show for the past couple of months’ reading. As the semester winds down and grades get turned in, who knows but some new blog posts may appear.

On the xkcd Philosophy of Technology, Briefly

Before I say anything else, let me say this: I get where this is coming from, really, I do.

[Image: xkcd, “Simple Answers”]

This is a recent xkcd comic that you may have seen in the past day or two. It makes a valid point, sort of. The point that I’d say is worth taking from this comic is this:  both unbridled enthusiasm and apocalyptic fear of any new technology are probably misguided. Some people need to hear that.

That said, this comic is similar in spirit to an earlier xkcd offering that Alan Jacobs rightfully picked apart with some searching questions. Although it is more pronounced in the earlier piece, both exhibit a certain cavalier “nothing-to-see-here-let’s-move-it-along” attitude with regard to technological change. In fact, one might be tempted to conclude that according to the xkcd philosophy of technology, technology changes nothing at all. Or, worse yet, that with technology it is always que sera, sera and we do well to stop worrying and enjoy the ride wherever it may lead.

In truth, I might’ve let the whole thing go without comment were it not for that last entry on the chart. It’s a variation of a recurring, usually vacuous rhetorical move that latches on to any form of continuity in order to dismiss concerns, criticism, etc. Since we have always already been X, this new form of X is inconsequential. It is as if reality were an undifferentiated, uncaused monistic affair in which the consequences of every apparent change are always contained and neutralized by its antecedents. But, of course, the fact that human beings have always died does not suggest to anyone in their right mind that we should never investigate the causes of particular deaths so as to prevent them when it is reasonable and possible to do so.

Similarly, pointing out that human beings have always used technology is perhaps the least interesting observation one could make about the relationship between human beings and any given technology. Continuity of this sort is the ground against which figures of discontinuity appear, and it is the figures that are of interest. Alienation may be a fact of life (or maybe that is simply the story that moderns tell themselves to bear it), but it has been so to greater and lesser extents and for a host of different reasons. Pointing out that we have always been alienated is, consequently, the least interesting and the least helpful thing one could say about the matter.



Google Glass: Technology as Symbol

[Note: This post first appeared on Medium in July. At the time, I mentioned it on this blog and provided a link. I’m now republishing the post here in its entirety.] 

Sometimes a cigar is just a cigar, Freud reportedly quipped, and sometimes technology is just a tool. But sometimes it becomes something more. Sometimes technology takes on symbolic, or even religious significance.

In 1900, Paris welcomed the new century by hosting the Exposition Universelle. It was, like other expositions and world’s fairs before it, a showcase of both cultural achievement and technological innovation. One of the most popular exhibits at the Exposition was the Palace of Electricity, which displayed a series of massive dynamos powering all the other exhibition halls. Among the millions of visitors that came through the Palace of Electricity was an American, the historian and cultural critic Henry Adams, who published a memorable account of his experience. Adams was awestruck by the whirling dynamos and, perhaps because he had recently visited the cathedral at Chartres, he drew an evocative comparison between the dynamo and the power of the Virgin in medieval society. Speaking in the third person, Adams wrote,

As he grew accustomed to the great gallery of machines, he began to feel the forty-foot dynamos as a moral force, much as the early Christians felt the Cross. The planet itself seemed less impressive, in its old-fashioned, deliberate, annual or daily revolution, than this huge wheel, revolving within arm’s length at some vertiginous speed, and barely murmuring — scarcely humming an audible warning to stand a hair’s-breadth further for respect of power — while it would not wake the baby lying close against its frame. Before the end, one began to pray to it; inherited instinct taught the natural expression of man before silent and infinite force.

Writing in the early 1970s, Harvey Cox revisited Adams’ meditation on the Virgin and the Dynamo and concluded that Adams saw “what so many commentators on technology since then have missed. He saw that the dynamo … was not only a forty-foot tool man could use to help him on his way, it was also a forty-foot-high symbol of where he wanted to go.”

Cox looked around American society in the early 70s, and wondered how Adams would read the symbolic value of the automobile, the jet plane, the hydrogen bomb, or a space capsule. These too had become symbols of the age and they invited a semiology of the “symbolism of technology.”

“Technological artifacts become symbols,” Cox wrote, “when they are ‘iconized,’ when they release emotions incommensurate with their mere utility, when they arouse hopes and fears only indirectly related to their use, when they begin to provide elements for the mapping of cognitive experience.”

Take the airplane, for example. In his classic study, The Winged Gospel: America’s Romance with Aviation, Joseph Corn summarized a remarkable article that appeared in 1916 about the future of flight. In it, the author predicted trans-oceanic flights by 1930 and, by 1970, the emergence of “traffic rules of the air” necessitated by the heavy volume of airplane traffic. Then the timeline leaps forward to the year 3000. At this point “superhumans” would’ve evolved and they would “live in the upper stratas of the atmosphere and never come down to earth at all.” By the year 10000, “two distinct types of human beings” would have appeared: “Alti-man” and “ground man.” Alti-man would live his entire life in the sky and would have no body; he would be an “ethereal” being that would “swim” in the sky like we swim in the ocean.

As Corn put it, “Alti-man was nothing if not a god. He epitomized the winged gospel’s greatest hope: mere mortals, mounted on self-made mechanical wings, might fly free of all earthly constraints and become angelic and divine.”

This may all sound tremendously hokey to our ears, but Corn’s book is full of only slightly less implausible aspirations that attached themselves to the airplane throughout the early and mid-twentieth century. And it wasn’t just the airplane. In American Technological Sublime, historian David Nye chronicled the near-religious reverence and ritual that gathered around the railroad, the first skyscrapers, the electrified cityscape, the Hoover Dam, atomic bombs, and Saturn V rockets.

Taking an even broader historical perspective, the late David Noble argued that the modern technological project has always been shot through with religious and quasi-spiritual aspirations. He traced what he called the “religion of technology” back from the late medieval era through pioneering early modern scientists to artificial intelligence and genetic engineering.

The symbolism of technology, however, does not always crystalize society’s hopes and ambitions. To borrow Cox’s phrasing, it does not always embody where we want to go. Sometimes it is a symbol of fears and anxieties. In The Machine in the Garden, for instance, Leo Marx meticulously detailed how the locomotive became a symbol that collected the fears and anxieties generated by the industrial revolution in nineteenth century America.

As late as 1901, long after the railroad had become an ordinary aspect of American life, novelist Frank Norris described it in The Octopus as “a terror of steam and steel,” a “symbol of vast power, huge, terrible, flinging the echo of its thunder over all the reaches of the valley,” and a “leviathan, with tentacles of steel clutching into the soil, the soulless Force, the iron-hearted Power, the monster, the Colossus, the Octopus.”

The sublime experience accompanying the atomic bomb also inspired fear and trepidation. This response was most famously put into words by Robert Oppenheimer when, after the detonation of the first atomic bomb, he quoted the Bhagavad Gita: “I am become Death, the destroyer of worlds.”

This duality is not surprising given what we know about religious symbols generally. Drawing on sociologist Emile Durkheim, Cox noted that sacred symbols “are characterized by a high degree of power and of ambiguity. They arouse dread and gratitude, terror and rapture. The more central and powerful a symbol is for a culture, the more vivid the ambiguity becomes.” The symbolism of technology shares this interplay between power and ambiguity. Our most powerful technologies both promise salvation and threaten destruction.

So what are the symbolic technologies of our time? The recent farewell tour by the space shuttle fleet evoked something approaching Nye’s technological sublime, and so too did Curiosity’s successful Mars landing. Neither of these, however, seems to rise to the level of technological symbolism described by Cox. They are momentarily awe-inspiring, but they are not quite symbols. The Singularity movement certainly does contain strong strands of Noble’s “religion of technology,” and it explicitly promises one of humanity’s long-sought-after dreams, immortality. But the movement’s ambitions do not easily coalesce around one particular technology or artifact that could collect its force into a symbol.

Here’s my candidate: Google Glass.

I can’t think of another recent technology whose roll-out has occasioned such a strong and visceral backlash. You need only scroll through a few months’ worth of posts at Stop the Cyborgs to get a feel for how all manner of fears and anxieties have gathered around Glass. Here are some recent post titles:

Google Won’t Allow Face Recognition on Glass Yet

Überveillance | Think of it as big brother on the inside looking out

Consent is not enough: Privacy Self-Management and the Consent Dilemmas

Stigmatised | Face recognition as human branding

The World of Flawed Data and Killer Algorithms is Upon Us…

Google Glass; Making Life Efficient Through Privacy Invasion

Glass has appeared at a moment already fraught with anxiety about privacy, and that was the case even before recent revelations about the extent and ubiquity of NSA surveillance. In other words, just when the fear of being watched, monitored, or otherwise documented has swelled, along comes a new technology in the shape of glasses, our most recognizable ocular technology, that aptly serves as an iconic representation of those fears. If our phones and browsers are windows into our lives, Glass threatens to make our gaze and the gaze of the watchers one and the same.

But remember the dual nature of potent symbols: we have other fears to which Glass may present itself as a remedy. We fear missing out on what transpires online, and Glass promises to bring the Internet right in front of our eyes so we will never have to miss anything again. We fear experiences may pass by without our documenting them, and Glass promises the power to document our experience pervasively. If we fear being watched, Glass at least allows us to feel as if we can join the watchers. And behind these particular fears are more primal, longstanding fears: the fear of loneliness and isolation, the fear of death, the fear of insecurity and vulnerability. Glass answers to these as well.

Interestingly, the website I cited earlier was not called, “Stop Google Glass”; it was called, “Stop the Cyborgs.” Perhaps Google Glass is the icon the Singularity project has been looking for. Glass is not quite an implant, but something about its proximity to the body or about how it promises to fade from view and become the interface through which our consciousness experiences reality … something about it suggests the blurring of the line between human and machine. Perhaps that is the greatest fear and highest aspiration of our age. The fears of those who would preserve humanity as they know it, and the aspirations of those who are prepared, as they see it, to transcend humanity are embodied in Glass.

Long before he visited the Exposition Universelle, Henry Adams wrote to his brother:

You may think all this nonsense, but I tell you these are great times. Man has mounted science and is now run away with. I firmly believe that before many centuries more, science will be the master of man. The engines he will have invented will be beyond his strength to control. Some day science may have the existence of mankind in its power, and the human race commit suicide by blowing up the world.

We might think all that nonsense, but it wasn’t that long ago that fears of a nuclear winter gripped our collective imagination. More recently, other technological scenarios have fueled our popular cultural nightmares: biogenetically cultivated global epidemics, robot apocalypses, or climate catastrophes. In each case, the things we have made “become Death, destroyer of worlds.” With Glass, the fear is not that we will blow up the world or unleash a global catastrophe. It is that we will simply innovate the humanity out of ourselves. Remembering how the story turned out, we might put the temptation this way: If we will place Glass before our eyes, they will be opened, and we will become as gods.

Of course, reading the symbolism of technology is not quite like reading palms or tea leaves. The symbols necessitate no particular future in themselves. But they are cultural indicators and as such they reveal something about us, and that is valuable enough.

A Hedgehog in a Fox’s World

I’m ordinarily reluctant to complain. This is partly a function of personality and partly a matter of conviction. I’m reticent by nature, and I tend to think that most complaining is petty, self-serving, unhelpful, and tiresome.

That said, I’ve found myself complaining recently. I’m thinking of two separate incidents in the last week or so. In one exchange, I wrote to a friend that I was “Well enough, in that stretched-so-thin-people-can-probably-see-through-me kind of way.” In another conversation, I admitted that what annoyed me about my present situation, the situation that I’ve found myself in for the past few years, was that I was attempting to do so many things simultaneously that I could do none of them well.

I teach in a couple of different settings, I’m trying to make my way through a graduate program, I’ve got a writing project that’s taken me much too long to complete, and I’d like to be a halfway decent husband. I could list other demands, but you get the idea. And, of course, those of you with children are reading this and saying, “Just you wait.” And that’s the thing: most people “feel my pain.” What I’m describing seems to be what it feels like to be alive for most people I know.

I was reminded of Isaiah Berlin’s famous discussion of the fox and the hedgehog. Expounding on an ancient Greek saying — “the fox knows many things, the hedgehog knows one big thing” — Berlin went on to characterize thinkers as either foxes or hedgehogs. Plato, Hegel, Nietzsche, for example, were hedgehogs; they had one big idea by which they interpreted the whole of experience. Aristotle, Montaigne, and Goethe were foxes; they were more attuned to the multifarious particularities of experience.

Berlin had intellectual styles in mind, but, if I may re-apply the proverb to the realm of action in everyday life, I find myself wanting to be a hedgehog. I want to do one thing and do it well. Instead, I find myself having to be a fox.

The more I thought about it, the more it seemed to me that this was, in fact, a pretty good way of thinking about the character of contemporary life and competing responses to the dynamics of digital culture.

Clearly, there are forces at play that predate the advent of digital technologies. In fact, part of the unsettled, constantly shifting quality of life I’m getting at is what sociologist Zygmunt Bauman has called “liquid modernity.” The solid structures of pre-modern, and even early modern society, have in our late-modern (postmodern, if you prefer) times given way to flux, uncertainty, and instability. (If you survey the titles of Bauman’s books over the last decade or so, you’ll quickly notice that Bauman has something of the hedgehog in him.)

The pace, structure, and dynamism of digital communication technologies have augmented these trends and brought their demands to bear on an ever larger portion of lived experience. In other words, multi-tasking, continuous partial attention, our skimming way of thinking, the horcrux-y character of our digital devices, the distraction/attention debates — all of this can be summed up by saying that we are living in a time where foxes are more likely to flourish than hedgehogs. Or, put more temperately, we are living in a time where foxes are more likely to feel at home than hedgehogs. This is great for foxes, of course, and may they prosper.

But what if you’re a hedgehog?

You cope and make do, of course. I don’t, after all, mean to complain.