“In the Vineyard of the Text”


That is the title of a slender volume by historian and social critic Ivan Illich, published in 1993 and subtitled A Commentary to Hugh’s Didascalicon.  The Hugh in question is Hugh of St. Victor, a medieval theologian, philosopher, and mystic who wrote the Didascalicon around 1128.  According to Illich, Hugh’s Didascalicon was the “first book written on the art of reading,” and he believes that revisiting this 12th-century work will help us better understand our ongoing transition from “bookish” reading to whatever multiple forms of reading have been emerging in the wake of the electronic or digital revolution.

In the spirit of the commonplace book, I’m going to transcribe some excerpts from In the Vineyard of the Text below and in subsequent posts over the next several days.  Realizing that there are countless books I would like to read and, for a variety of reasons, will simply never get to, I appreciate it when I can nonetheless glean something from one of those books through a review or post that excerpts some of the key ideas worth considering.  Hardly the best form of reading, but nonetheless a useful one, I think.  So in the hope that this will prove useful, and because I do think Illich wrote a remarkably insightful little book, here are some selections from the Introduction:

  • “I concentrate my attention on a fleeting but very important moment in the history of the alphabet when, after centuries of Christian reading, the page was suddenly transformed from a score for pious mumblers into an optically organized text for logical thinkers.  After this date a new kind of classical reading became the dominant metaphor for the highest form of social activity.”
  • “Quite recently reading-as-a-metaphor has been broken again.  The picture and its caption, the comic book, the table, box, and graph, photographs, outlines, and integration with other media demand from the user textbook habits which are contrary to those cultivated in scholastic readerships.”
  • Quoting George Steiner:  “The development of the modern book and of book-culture as we know it seems to have depended on a comparable fragility of crucial and interlocking factors.”
  • “Classical print culture was an ephemeral phenomenon.  According to Steiner, to belong to ‘the age of the book’ meant to own the means of reading.  The book was a domestic object; it was accessible at will for re-reading.  The age presupposed private space and the recognition of the right to periods of silence, as well as the existence of echo-chambers such as journals, academies, or coffee circles.  Book culture required a more or less agreed-upon canon of textual values and modes . . . . [T]he formalities involved in this one kind of reading defined, and did not just reflect, the dimensions of social topology.”
  • “The book has now ceased to be the root-metaphor of the age; the screen has taken its place.  The alphabetic text has become but one of many modes of encoding something, now called ‘the message.’”
  • “Bookish reading can now clearly be recognized as an epochal phenomenon and not as a logically necessary step in the progress toward the rational use of the alphabet; as one mode of interaction with the written page among several; as a particular vocation among many, to be cultivated by some, leaving other modes to others.”
  • “. . . in the first six chapters I describe and interpret a technical breakthrough which took place around 1150, three hundred years before movable type came into use.  This breakthrough consisted in the combination of more than a dozen technical inventions and arrangements through which the page was transformed from score to text.  Not printing, as is frequently assumed, but this bundle of innovations, twelve generations earlier, is the necessary foundation for all stages through which bookish culture has gone since.  This collection of techniques and habits made it possible to imagine the ‘text’ as something detached from the physical reality of a page.  It both reflected and in turn conditioned a revolution in what learned people did when they read — and what they experienced reading to mean.”
  • “By centering our analysis on the object that is shaped by letters, and on the habits and fantasies connected with its use, we turn this object into a mirror reflecting significant transformations in the mental shape of western societies, something not easily brought out by other approaches.”
  • “What had started as a study in the history of technology, ended up as a new insight into the history of the heart.  We came to understand Hugh’s ars legendi [art of reading] as an ascetic discipline focused by a technical object.  Our meditation on the survival of this mode of reading under the aegis of the bookish text led us to enter upon a historical study of an asceticism that faces the threat of computer ‘literacy.’”

The Introduction as a whole challenges us to think again about the activity of reading.  We assume reading to be a singular activity; we assume it is always done in the same way and for the same reasons; we do not think of the alphabet as a technology that can be deployed in multiple modes; we do not think of a whole cultural milieu as depending on something so mundane as reading; we do not think that changes in reading habits and assumptions about reading could reorder society.  By taking us back some 900 years, Illich aims to show us that these assumptions are mistaken.

More to come.

“The Past Is Never Dead”

In the wake of the Protestant Reformation, religious violence tore across Europe.  The Wars of Religion, culminating with the Thirty Years’ War, left the continent scarred and exhausted.  Out of the ashes of war the secular nation state arose to establish a new political order, one which privatized religion and enshrined reason and tolerance as the currency of the public sphere, ensuring an end to irrational violence.

That is one of the more familiar historical narratives that we tell ourselves.  It is sweeping and elegant in its scope and compelling in its explanatory power.  There’s only one problem, according to William Cavanaugh:  it’s not true.  Cavanaugh lays out his case in his most recent book, The Myth of Religious Violence: Secular Ideology and the Roots of Modern Conflict (Oxford UP, 2009).  Needless to say, he has his work cut out for him.  The narrative he seeks to deconstruct is deeply entrenched, and we’ve staked a lot on it.  His point, to be clear, is not that religion has never been implicated in violence.  As he puts it elsewhere, “Given certain conditions, Christianity, Islam, and other faiths can and do contribute to violence.”  Rather, he is contesting the particular historical narrative whereby the secular nation state arises in response to religious violence in order to secure peace for society by marginalizing religious practice and discourse.

To begin with, Cavanaugh demonstrates that the very concept of religion is problematic, which renders any neat parsing of violence into religious and secular categories tenuous at best.  Moreover, the nation state precedes the so-called wars of religion and is best seen as a contributing cause, not an effect, of the wars.  The historical realities of the wars resist the simplistic “Wars of Religion” schema anyway.  For example, during the Thirty Years’ War, Catholics were at times fighting other Catholics and sometimes in league with Protestants.  The Thirty Years’ War, as it turns out, was a scramble by competing dynasties and rising national governments to fill the power vacuum created by the collapse of the medieval political order.  Furthermore, Cavanaugh suggests that the state co-opted and assumed for itself many of the qualities of the church, creating, as one reviewer put it, “its own sacred space, with its own rituals, hymns, and theology, and its own universal mission.”  In the end, the secular nation state, particularly in its 20th-century totalitarian apotheosis, hardly appears as the champion of reason, peace, and tolerance.  The nation state secured its status by monopolizing the use of coercive force.  In doing so, however, it clearly did not put an end to violence.

Cavanaugh presents a counter-intuitive thesis, and he takes care to make his case.  It is a case he has been working out since 1995 when, as a graduate student, he published “‘A fire strong enough to consume the house’: The Wars of Religion and the Rise of the Nation State.”  In the intervening years he has honed and strengthened his argument, which finds mature expression in The Myth of Religious Violence.

Whether one ultimately agrees with Cavanaugh’s thesis or not, his work highlights two important considerations regarding historical narratives.  First, historical reality is usually more complex than the stories we tell, and the complexity matters.  We are living through a cultural moment when historical awareness is a rare commodity, so perhaps we shouldn’t complain too much about shallow historical knowledge when the alternative may be no historical knowledge at all.  That said, much of what does pass for historical knowledge is filtered through Hollywood, the entertainment industry, or the talk-show circuit, and in each of these venues subtlety must be sacrificed to the demands of the medium.  The big picture is sometimes painted at the expense of important details, so much so that the big picture itself is rendered misleading.

Perhaps most days of the week, this is not a terribly important consideration.  But it can become very significant under certain circumstances.  When a historical narrative is hotly contested and passionately defended, it is usually because the real battle is over the present.  Consider the heated debates about the Christian or secular origins of the American constitutional order, or the arguments over the causes of the American Civil War and Southern identity.  Leaving the terrain of American history, consider the Armenian genocide, the Japanese atrocities at Nanking, or the tangled history of the Balkans.  In each case the real issue is clearly not the accuracy of our historical memory so much as it is the perceived implications for the present.  In other words, we fight for our vision of the present on the battlefield of the past.  This raises a host of other questions related to the status of arguments from history, philosophies of history, and historiography.  These sorts of questions, however, are rarely raised at rallies or on television — it would be hard to fit them on a placard or in a 10-second sound bite.

A debate about the origins of the modern nation state is likewise about more than historical accuracy.  Critics of religion and of its place in public life, Christopher Hitchens and Sam Harris for example, have made the historical narrative we began with a key component of their case — religion kills, the secular state saves.  Cavanaugh has offered a compelling rejoinder.

Either way it appears Faulkner was right:  “The past is never dead.  It’s not even past.”

_____

You can listen to a lecture and link to a number of essays by Cavanaugh here.

Clever Blog Post Title

Reading about Jon Stewart’s recent D.C. rally, I noticed a picture of a participant wearing a “V for Vendetta” mask and carrying a sign.  It wasn’t the mask but the words on the sign that caught my eye.  The sign read, “ANGRY SIGN.”

I wouldn’t have paid much attention to it had I not recently crossed paths with a guy wearing a T-shirt that read:  “Words on a T-shirt.”  Both of these then reminded me of the very entertaining “Academy Award Winning Trailer” — a self-referential spoof of movie trailers embedded at the end of this post.  The “trailer” opens with a character saying, “A toast — establishing me as the wealthy, successful protagonist.”  And on it goes in that vein.  The comments section on the clip’s YouTube page likewise yielded a self-referential parody of typical YouTube comments.  Example:  “Quoting what I think is the funniest part of the video and adding ‘LOL’ at the end of the comment.”

There is a long tradition of self-reference in the art world; Magritte’s pipe is a classic example.  But now it seems the self-referentiality that was once a mark of the avant-garde is an established feature of popular culture.  Call it vulgar self-referentiality if you like.  Our sophistication, in fact, seems to be measured by the degree of our reflexivity and self-awareness — “I know, that I know, that I know, etc.” — which shades into a mood of ironic detachment.  Earnestness in this environment becomes a vice.  We are in a sense “mediating” or re-presenting our own selves through this layer of reflexivity, and we’re pretty sure everyone else is too.

On the surface, perhaps, there is a certain affinity with the Socratic injunction, “Know thyself.”  But this is meta-Socrates:  “Knowingly know thyself.”  At issue for some is whether there is a subject there to know that would localize and contain the knowing, or whether, in the absence of a subject, all there is to know is a process of knowing, knowing itself.

Leaning on Thomas de Zengotita’s Mediated: How the Media Shapes Your World and the Way You Live in It and Hannah Arendt’s The Human Condition, here’s a grossly oversimplified thumbnail genealogy of our heightened self-consciousness.  On the heels of the Copernican revolution, we lost confidence in the ability of our senses to apprehend the world truthfully (my eyes tell me the world is stationary and the sun moves; it turns out my eyes lied), but following Descartes we sought certainty in our inner subjective processes — “I think, therefore I am” and all that.  Philosophically, then, our attention turned to our own thinking — from the object out there to the subject in here.

Sociologically, modernity has been marked by an erosion of cultural givens; very little attains a taken-for-granted status relative to the pre-modern and early modern world.  In contrast to the medieval peasant, consider how much of your life is not pre-determined by cultural necessity.  Modernity, then, is marked by the expansion of choice, and choice ultimately foregrounds the act of choosing, which yields an attendant lack of certainty in the choice — I am always aware that I could have chosen otherwise.  In other words, identity grounded in the objective (reified) structures of traditional society is superseded by an identity which is the aggregate of the choices we make — choice stands in for fate, nature, providence, and all the rest.  Eventually an awareness of this process throws even the notion of the self into question; I could have chosen otherwise, thus I could be otherwise.  The self, as the story goes, is decentered.  And whether or not that is really the case, it certainly seems to be what we feel to be the case.

So the self-referentialism that marked the avant-garde and the theories of social constructivism that were controversial a generation ago are by now old news.  They are widely accepted by most people under 35, who picked it all up not by visiting the art houses or reading Derrida and company, but by living with and through the material conditions of their society.

First the camera, then the sound recorder, the video recorder, and finally the digitization and consequent democratization of the means of production and distribution (cheap cameras/YouTube, etc.) created a society in which we all know that we know that we are enacting multiple roles, and that no action yields a window onto the self, if by self we mean some essential, unchanging core of identity.  Foucault’s surveillance society has produced a generation of method actors.  In de Zengotita’s words,

Whatever the particulars, to the extent that you are mediated, your personality becomes an extensive and adaptable tool kit of postures . . . You become an elaborate apparatus of evolving shtick that you deploy improvisationally as circumstances warrant.  Because it is all so habitual, and because you are so busy, you can almost forget the underlying reflexivity.

Almost.

There seems to be a tragically hip quality to this kind of hyper-reflexivity, but it is also like looking into a mirror image of a mirror — we get mired in an infinite regress of our own consciousness.  We risk self-referential paralysis, individually and culturally, and we experience a perpetual crisis of identity.  My sense is that a good deal of our cynicism and apathy is also tied into these dynamics.  Not sure where we go from here, but this seems to be where we are.

Or maybe this is all just me.

“The Things We Make, Make Us”

A while ago I posted about a rather elaborate Droid X commercial that featured a man’s arm morphing into a mechanical, cyborg arm from which the Droid phone then emerges.  This commercial struck me as a useful and vivid illustration of an important theoretical metaphor, deployed by Marshall McLuhan among others, to help us understand the way we relate to our technologies:  technology as prosthesis.

This weekend I came across another commercial that once again captured, intentionally or otherwise, an important element of our relationship with technology.  This time it was a commercial (which you can watch below) for the 2011 Jeep Grand Cherokee.  The commercial situates the new Grand Cherokee within the mythic narrative of American technology.  “The things that make us Americans,” we are told in the opening lines, “are the things we make.”  In 61 seconds flat we are presented with a series of iconic American technologies:  the railroad, the airplane, the steel mill, the cotton gin, the Colt .45, the skyscraper, the telegraph, the light bulb, and, naturally, the classic Jeep depicted in World War II era footage.  As if to throw in the mythical/American kitchen sink, at one point the image of a baseball hitting a catcher’s mitt is flashed on the screen.

(Never mind that a rather dark tale could be woven out of the history of the development and deployment of many of these technologies, and that their sacred quality was lost on some rather consequential Americans, including Nathaniel Hawthorne and Herman Melville.  For more on the place of technology in American history and culture, take a look at David Nye’s American Technological Sublime and Leo Marx’s The Machine in the Garden: Technology and the Pastoral Ideal in America.)

In any case, the commercial closes with the line, “The things we make, make us.”  I suspect that this is an instance of someone speaking better than they know.  My guess is that the intended meaning is something like, “Making and inventing is part of what it means to be an American.”  We build amazing machines; that’s who we are.  But there is a deeper sense in which it is true that “the things we make, make us.”  We are conditioned (although, I would want to argue, not wholly determined) creatures.  That is to say, we are to some degree a product of our environment, and that environment is shaped in important respects by the tools and technologies it encompasses.

Nothing new here; this is a point made in one way or another by a number of observers and critics.  For example, consider the argument advanced by Walter Ong in Orality and Literacy.  The technology of writing, Ong contends, transforms oral societies and the way their members experience the world.  Ong and others have explored the similar significance of printing for the emergence of modern society and modern consciousness.  Lewis Mumford famously suggested that it is to the invention of the clock that we owe the rise of the modern world and the particular disposition toward time that characterizes it.  Historians and social critics have also explored the impact of the steam engine, the car, the telephone, the radio, the television, and, most recently, the Internet on humans and their world.  Needless to say, we are who we are in part because of the tools that we have made and that now are in turn making us.  And as I’ve noted before, Katherine Hayles (and she is not alone) goes so far as to suggest that as a species we have “codeveloped with technologies; indeed, it is no exaggeration,” she writes in Electronic Literature, “to say modern humans literally would not have come into existence without technology.”

Now this may be a bit more than what Jeep had in mind, but thanks to their commercial we are reminded of an interesting and important facet of the human condition.

Warning: A Liberal Education Leads to Independent Thinking

File this one under “Unintended Consequences.”

In the 1950s, at the height of the Cold War, the Bell Telephone Company of Pennsylvania put its most promising young managers through a rigorous 10-month training program in the Humanities with the help of the University of Pennsylvania.  During that time they participated in lectures and seminars, read voraciously, visited museums, attended the symphony, and toured Philadelphia, New York, and Washington.  To top it off, many of the leading intellectuals of the time were brought in to lecture these privileged few and discuss their books.  Among the luminaries were the poet W. H. Auden and the sociologist David Riesman, whose 1950 book, The Lonely Crowd, was a classic study of the set to which these men belonged.

The idea behind the program was simple.  Managers with only a technical background were competent at their present jobs, but they were not sufficiently well-rounded for the responsibilities of upper management.  As sociologist E. Digby Baltzell put it, “A well-trained man knows how to answer questions, they reasoned; an educated man knows what questions are worth asking.”  Already in the early 20th century “information overload” was deemed a serious problem for managers, but by the early 1950s it was believed that computers were going to solve the problem.  (I know.  That in itself is worth elaboration, but it will have to wait for another post.)  The automation associated with computers, however, ushered in a new problem — the danger that the manager would become a thoughtless, unoriginal, technically competent conformist.  Writing in 1961, Walter Buckingham warned against the possibility that automation would lead not only to a “standardization of products,” but also to a “standardization of thinking.”

But there were other worries as well.  It was feared that the Soviet Union was pulling ahead in the sheer number of scientists and engineers it produced, creating a talent gap between the USSR and America.  As a way of undercutting this advantage, many looked to the Humanities and a liberal education.  According to Thomas Woody, writing in 1950, “Liberal education was an education for free men, competent to fit them for freedom.”  Thus a humanistic education became not only a tool to better prepare business executives for the complexity of their jobs, but also a weapon against Communism.

In one sense, the program was a success.  The young men were reading more, their intellectual curiosity was heightened, and they were more open-minded and able to see an argument from both sides.  There was one problem, however.  The Bell students were now less willing to be cogs in the corporate machinery.  Their priorities were reordered around family and community.  According to one participant, “Now things are different.  I still want to get along in the company, but I now realize that I owe something to myself, my family, and my community.”  Another put it this way,

Before this course, I was like a straw floating with the current down the stream.  The stream was the Bell Telephone Company.  I don’t think I will ever be like that straw again.

Consequently, the program began to appear as a threat to the company.  There was one other strike against it as well:  a survey revealed that, after passing through the program, participants were likely to become more tolerant of socialism and less certain that a free democracy depended upon free business enterprise.  By 1960, the program was disbanded.

This is a fascinating story about the power of an education in the humanities to enlarge the mind and fit one for freedom.  But it is also a reminder that in an age of conformity, thinking for oneself is not always welcomed even if it is paid lip service.  After all, remember how well things turned out for Socrates.

__________________

A note about sources:  I first read about the Institute of Humanistic Studies for Executives in an op-ed piece by Wes Davis in the NY Times.  The story fascinated me, and I subsequently found an article on the program by Mark D. Bowles, published in the journal The Historian in 1998 and titled “The Organization Man Goes to College:  AT&T’s Experiment in Humanistic Education, 1953-1960.”  Quotes in this post are drawn from Bowles’ article.