Warning: A Liberal Education Leads to Independent Thinking

File this one under “Unintended Consequences.”

In the 1950s, at the height of the Cold War, Bell Telephone Company of Pennsylvania put its most promising young managers through a rigorous 10-month training program in the Humanities with the help of the University of Pennsylvania.  During that time they participated in lectures and seminars, read voraciously, visited museums, attended the symphony, and toured Philadelphia, New York, and Washington.  To top it off, many of the leading intellectuals of the time were brought in to lecture these privileged few and discuss their books.  Among the luminaries were poet W. H. Auden and sociologist David Riesman, whose 1950 book, The Lonely Crowd, was a classic study of the set to which these men belonged.

The idea behind the program was simple.  Managers with only a technical background were competent at their present jobs, but they were not sufficiently well-rounded for the responsibilities of upper management.  As sociologist E. Digby Baltzell put it, “A well-trained man knows how to answer questions, they reasoned; an educated man knows what questions are worth asking.”  Already in the early 20th century “information overload” was deemed a serious problem for managers, but by the early 1950s it was believed that computers were going to solve the problem.  (I know.  That in itself is worth elaboration, but it will have to wait for another post.)  The automation associated with computers, however, ushered in a new problem — the danger that the manager would become a thoughtless, unoriginal, technically competent conformist.  Writing in 1961, Walter Buckingham warned against the possibility that automation would lead not only to a “standardization of products,” but also to a “standardization of thinking.”

But there were other worries as well.  It was feared that the Soviet Union was pulling away in the sheer numbers of scientists and engineers, creating a talent gap between the USSR and America.  As a way of undercutting this advantage, many looked to the Humanities and a liberal education.  According to Thomas Woody, writing in 1950, “Liberal education was an education for free men, competent to fit them for freedom.”  Thus a humanistic education became not only a tool to better prepare business executives for the complexity of their jobs, but also a weapon against Communism.

In one sense, the program was a success.  The young men were reading more, their intellectual curiosity was heightened, and they were more open-minded and able to see an argument from both sides.  There was one problem, however.  The Bell students were now less willing to be cogs in the corporate machinery.  Their priorities were reordered around family and community.  According to one participant, “Now things are different.  I still want to get along in the company, but I now realize that I owe something to myself, my family, and my community.”  Another put it this way,

Before this course, I was like a straw floating with the current down the stream.  The stream was the Bell Telephone Company.  I don’t think I will ever be like that straw again.

Consequently, the program began to appear as a threat to the company.  One other strike against the program:  a survey revealed that after passing through the program, participants were likely to become more tolerant of socialism and less certain that a free democracy depended upon free business enterprise.  By 1960, the program was disbanded.

This is a fascinating story about the power of an education in the humanities to enlarge the mind and fit one for freedom.  But it is also a reminder that in an age of conformity, thinking for oneself is not always welcomed even if it is paid lip service.  After all, remember how well things turned out for Socrates.

__________________

A note about sources:  I first read about the Institute for Humanistic Studies for Executives in an op-ed piece by Wes Davis in the NY Times.  The story fascinated me, and I subsequently found an article on the program published in the journal The Historian in 1998 by Mark D. Bowles, titled “The Organization Man Goes to College:  AT&T’s Experiment in Humanistic Education, 1953-1960.”  Quotes in the post are drawn from Bowles’ article.

Technological Momentum and Education

“There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.”  — Marshall McLuhan, The Medium is the Massage

Conversations about technology and education, in my experience, eventually invoke certain vague notions of inevitability. There is often talk about getting on the train before it leaves the station and all of that.  Perhaps it is the case that notions of inevitability will surface in most discussions about technology whether or not education is involved — the specter of technological determinism casts a long shadow. I am not a technological determinist. Nevertheless, I do believe technology influences us in significant ways. How do we describe this condition of being influenced, but not determined?

The concept of technological momentum employed by historian Thomas Hughes provides a helpful way of thinking about this question.  In Technology Matters: Questions to Live With, David Nye explains Hughes’s concept and offers some examples.

Hughes argues that technical systems are not infinitely malleable.  If technologies such as the bicycle or the automobile are not independent forces shaping history, they can still exercise a “soft determinism” once they are in place …

“Technological momentum” is not inherent in any technological system when first deployed. It arises as a consequence of early development and successful entrepreneurship, and it emerges at the culmination of a period of growth. The bicycle had such momentum in Denmark and the Netherlands from 1920 until the 1960s, with the result that a system of paved trails and cycling lanes were embedded in the infrastructure before the automobile achieved momentum. In the United States, the automobile became the center of a socio-technical system more quickly and achieved momentum a generation earlier. Only some systems achieve “technological momentum” …. The concept seems particularly useful for understanding large systems. These have some flexibility when being defined in their initial phases. But as technical specifications are established and widely adopted, and as a system comes to employ a bureaucracy and thousands of workers, it becomes less responsive to outside pressures …

Hughes makes clear when discussing “inertia” that the concept is not only technical but also cultural and institutional. A society may choose to adopt either direct current or alternating current, or to use 110 volts, or 220 volts, or some other voltage, but a generation after these choices have been made it is costly and difficult to undo such a decision. Hundreds of appliance makers, thousands of electricians, and millions of homeowners have made a financial commitment to these technical standards. Furthermore, people become accustomed to particular standards and soon begin to regard them as natural. Once built, an electrical grid is “less shaped by and more the shaper of its environment.” This may sound deterministic, but it is not entirely so, for people decided to build the grid and selected its specifications and components. To later generations, however, such technical systems seem to be deterministic.

Coming back to the more specific topic of technology in education in light of Nye’s observations, I want to suggest that teachers and administrators think carefully about the implementation of technology, particularly in its early stages.  There is no inevitability.  We have choices to make.  Those choices may lead to the adoption of certain technologies and corresponding practices, and later the institutionalization of those technologies and practices may eventually make it very hard to discard them.  This kind of inertia is what retrospectively makes the adoption and implementation of certain technologies appear inevitable.  But at the outset, there were choices to be made.

It is probably the case that in some circumstances the choice is not really a choice at all.  For example, in certain industries one may either have to constantly adopt and adapt or else lose business and fail.  Exercise of choice may also lead to marginalization — witness the Amish.  Choices come with consequences and costs.  I grant that those costs may sometimes amount to coercive pressure.

Perhaps education is one of these industries (calling it such is already to prejudice the matter) in which this sort of coercive pressure exists.  One hopes, however, that better aims and ideals are steering the ship. Teachers and administrators need to be clear about their philosophy of education, and they need to allow their vision for education to drive their choices about the adoption and implementation of new technology. If they are not self-conscious and intentional in this respect, and if they view technology merely as a neutral set of tools at their disposal, they will be disappointed and frustrated.

As media theorists have noted, the ecological metaphor can be a helpful way of thinking about and understanding our technologies. Once a new element is introduced into an ecosystem, we don’t get the same ecosystem plus a new element; we get a new ecosystem. The consequences may be benign, or they could be destructive. Think of the classroom as an ecosystem; the introduction of new technologies reconstitutes the classroom’s media ecosystem. Consequently, the adoption and implementation of new classroom technologies should be guided by clear thinking about how new technologies alter the learning environment and a sober estimation of their compatibility with a school’s philosophy of education.


Life Amid Ruins

In Status Anxiety — his part philosophically-minded self-help book, part social history — Alain de Botton describes two fashions that were popular in the art world during the 17th and 18th centuries, respectively.  The first, vanitas art, took its name from the biblical book of Ecclesiastes, in which it is written, “Vanity of vanities, all is vanity.”  Vanitas art, which flourished especially in the Netherlands and also in Paris, was, as the biblical citation implies, concerned with life’s fleeting nature.  As de Botton describes them,

Each still-life featured a table or sideboard on which was arranged a contrasting muddle of objects.  There might be flowers, coins, a guitar or mandolin, chess pieces, a book of verse, a laurel wreath or wine bottle: symbols of frivolity and temporal glory.  And somewhere among these would be set the two great symbols of death and the brevity of life:  a skull and an hourglass.

A bit morbid, we might think, but as de Botton explains,

The purpose of such works was not to send their viewers into a depression over the vanity of all things; rather, it was to embolden them to find fault with particular aspects of their own experience, while at the same time attending more closely to the virtues of love, goodness, sincerity, humility and kindness.

Okay, still a bit morbid, you might be thinking, but fascinating nonetheless.  Here is the first of two examples provided in Status Anxiety:

Philippe de Champaigne, circa 1671

Here is the second example:

Simon Renard de Saint-Andre, circa 1662

And here are a few others from among the numerous examples one can find online:

Edwart Collier, 1640
Pieter Boel, 1663
Adam Bernaert, circa 1665
Edwart Collier, 1690

Less morbid and more nostalgic, the second art fashion de Botton examines is the 18th- and 19th-century fascination with ruins.  This fascination was no doubt inspired in part by the unearthing of Pompeii’s sister city, Herculaneum, in 1738.  The most intriguing subset of these paintings of ancient ruins, however, were those that imagined not the past in ruins, but the future.  “A number of artists,” according to de Botton, “have similarly delighted in depicting their own civilization in a tattered future form, as a warning to, and reprisal against, the pompous guardians of the age.”  Consider these the antecedents of the classic Hollywood trope in which some famous city and its monuments lie in ruins — think Planet of the Apes and the Statue of Liberty.

Status Anxiety provides three examples of these future ruins.  The first depicts the Louvre in ruins:

Hubert Robert, Imaginary View of the Grande Galerie of the Louvre in Ruins, 1796

The second depicts the ruins of the Bank of England:

Joseph Gandy, View of the Rotunda of the Bank of England in Ruins, 1798

And the third, from a later period, depicts the city of London in ruins being sketched by a man from New Zealand, “the country that in Doré’s day symbolized the future,” in much the same way that Englishmen on their Grand Tours would sketch the ruins of Athens or Rome.

Gustave Doré, The New Zealander, 1871

Finally, both of these art fashions brought to my mind Nicolas Poussin’s Et in Arcadia ego from 1637-1638:

Nicolas Poussin, Et in Arcadia ego, 1637-38

Here, shepherds stumble upon an ancient tomb on which they read the inscription, Et in Arcadia ego.  There has been some debate about precisely how the phrase should be taken.  It may be read as the voice of death personified saying “even in Arcadia I exist,” or it may mean “the person buried in this tomb lived in Arcadia.”  In either case the moral is clear.  Death comes for the living.  It is a memento mori, a reminder of death (note the appearance of that phrase in the last piece of vanitas art above).

Admittedly, these are not the most uplifting of reflections.  However, de Botton’s point and the point of the artists who painted these works strike me as sound:  we make a better go of the present if we live with the kind of perspective engendered by these works of art.  Our tendency to ignore our mortality and our refusal to acknowledge the limitations of a single human life may be animating much of our discontent and alienation.  Perhaps.  Certainly there is some wisdom here we may tap into.  This is pure conjecture, of course, but I wonder how many, having contemplated Gandy’s painting, would have found the phrase “too big to fail” plausible.  Might we not, with a renewed sense of our mortality, reorder some of our priorities, bringing into sharper focus the more meaningful elements of life?

It is also interesting to consider that not only do we have few contemporary equivalents of the kind of artwork we’ve considered, but neither do we have any actual ruins in our midst.  America seems uniquely suited to being a country without a sense of the past.  Not only are we an infant society by historical standards, but even the ancient inhabitants of our lands, unlike those further south, left no monumental architecture, no tokens of antiquity.  Those of us who live in suburbia may go days without casting our eyes on anything older than twenty years.  We have been a forward-looking society whose symbolic currency has not been — could not have been — ruins of the past, but rather the Frontier, with its accent on the future and what may be.

I would not call into question the whole of this cultural sensibility, but perhaps we could have used just a few ruins.

“Are you really there?” — How not to become spectators of our lives

If you try to keep up with the ongoing debate regarding the Internet and the way it is shaping our world and our minds, you will inevitably come across the work of Jaron Lanier.  When you do, stop and take note.  Lanier qualifies as an Internet pessimist in Adam Thierer’s breakdown of The Great Debate over Technology’s Impact on Society, but he is an insightful pessimist with a long history in the tech industry.  Unlike other, often insightful, critics such as the late Neil Postman and Nicholas Carr, Lanier speaks with an insider’s perspective.  We noted his most recent book, You Are Not a Gadget: A Manifesto, not long ago.

Earlier this week, I ran across a short piece Lanier contributed to The Chronicle of Higher Education in response to the question, “What will be the defining idea of the coming decade, and why?” Lanier’s response, cheerfully titled “The End of Human Specialness,” was one of a number of responses solicited by The Chronicle from leading scholars and illustrators.  In his piece, Lanier recalls addressing the “common practice of students blogging, networking, or tweeting while listening to a speaker” and telling his audience at the time,

The most important reason to stop multitasking so much isn’t to make me feel respected, but to make you exist. If you listen first, and write later, then whatever you write will have had time to filter through your brain, and you’ll be in what you say. This is what makes you exist. If you are only a reflector of information, are you really there?

We have all experienced it; we know exactly what Lanier is talking about.  We’ve seen it happen, we’ve had it happen to us, and — let’s be honest — we have probably also been the offending party.  Typically this topic elicits a rant against the incivility and lack of respect such actions communicate to those on the receiving end, and that is not unjustified.  What struck me about Lanier’s framing of the issue, however, was the emphasis on the person engaged in the habitual multitasking rather than on the affront to the one whose presence is being ignored.

We are virtually dispersed people.  Our bodies are in one place, but our attention is in a dozen other places and, thus, nowhere at all.  This is not entirely new; there are antecedents.  Long before smart phones enabled a steady flow of distraction and allowed us to carry on multiple interactions simultaneously, we wandered away into the daydreams our imagination conjured up for us.  My sense, however, is that such retreats into our own consciousness are a different sort of thing from our media-enabled evacuations of the place and moment we inhabit.  For one thing, they were not nearly so frequent and intrusive.  We might also argue that when we daydream our attention is in fact quite focused in one place, the place of our dream.  We are somewhere rather than nowhere.

Whatever we think of the antecedents, however, it is clear that many of us are finding it increasingly difficult to be fully present in our own experience.  Perhaps part of what is going on is captured by the old adage about the man with a hammer to whom everything looks like a nail.  My most vivid experience of this dynamic came years ago with my first digital camera.  To the person with a digital camera (and enough memory), I discovered, everything looks like a picture, and you can’t help but take it.  I have wonderful pictures of Italy, but very few memories.  And so we may extrapolate:  to the person with a Twitter account, everything is a tweet waiting to be condensed into 140 characters.  To the person with a video recorder on their phone, everything is a moment to be documented.  To the person with an iPhone … well, pick the App.

In an article written by Professor Barry Mauer, I recently learned about Andy Warhol’s obsessive documentation of his own experience through photographs, audiotape, videotape, and film.  In his biography of Warhol, Victor Bockris writes,

Indeed, Andy’s desire to record everything around him had become a mania.  As John Perrault, the art critic, wrote in a profile of Warhol in Vogue:  “His portable tape recorder, housed in a black briefcase, is his latest self-protection device.  The microphone is pointed at anyone who approaches, turning the situation into a theater work.  He records hours of tape every day but just files the reels away and never listens to them.”

Warhol’s behavior would, I suspect, seem less problematic today.  Here too he was perhaps simply ahead of his time.  Given much more efficient tools, we are also obsessively documenting our lives.  But what most people do tends to be viewed as normal.  It is interesting, though, that Perrault referred to Warhol’s tape recorder as a “self-protection device.”  It called to mind R. R. Reno’s analysis of the pose of ironic detachment so characteristic of our society:

We enjoy an irony that does not seek resolution because it supports our desire to be invulnerable observers rather than participants at risk. We are spectators of our lives, free from the strain of drama and the uncertainty of a story in which our souls are at stake.

“Spectators of our lives.”  The phrase is arresting, and the prospect is unsettling.  But it is hardly necessary or inevitable.  If the cost of re-engaging our own lives, of becoming participants at risk in the unfolding drama of our own story, is a few fewer photos that we may end up deleting anyway, one less Facebook update from our phone, or one text left unread for a short while, then that is a price well worth paying.  We will be better for it, and those others, in whose presence we daily live, will be as well.

“With Only Their Labor to Sell”

Glenn Beck drew a crowd and a good deal of commentary from across the political and religious spectrum.  I wasn’t at Beck’s “Restoring Honor” rally this past weekend and I haven’t spoken to anyone who was, but I did come across a number of articles, editorials, and blog posts that offered their take on what was going on.  Needless to say, it wasn’t all positive.  But, as if to demonstrate that people remain more complex than our tendency to reduce the world into binary oppositions suggests, the most scathing review I read came from conservative Southern Baptist seminary professor Russell Moore, and one of the more self-consciously open-minded pieces came from the LGBT Editor of Religion Dispatches, Alex McNeill.  Ross Douthat, who was at the rally, offered his take on the political implications of the “apolitical” rally in his NY Times editorial and a follow-up blog post.

There is not much that I would care to add.  I’m basically in agreement with Moore, but I rather preferred the sensibility toward the actual people at the rally that McNeill demonstrated.  But thinking about the rally put me in mind to comment on a couple of other pieces I’d read within the last few days.  Beck (along with Limbaugh, Hannity, and company) tends to symbolize for many people the marriage of free market economics with cultural conservatism that came to dominate the political right from the late ’70s through the present.  Essentially it is the Reagan coalition.  But what if that marriage was an inherently unstable mixture?

Some time ago I was struck by a particular formulation, offered by historian Eric Miller, of Christopher Lasch’s critique of both ends of the political spectrum.  According to Lasch, both ends harbored a fatal tension.  The Left called for socially conscious and active individuals while promoting a vision of the self that was atomized and unencumbered.  The Right called for the preservation of moral tradition and community while promoting an economic order that undermined those very institutions.  This remains, to my mind, a very apt summation of our current political situation.

In two recent essays, Philip Blond and Jonny Thakkar call for what they have respectively termed Red Toryism and Left Conservatism.  Neither Blond nor Thakkar cites Lasch, but each channels Lasch’s analysis of the inner tension within modern conservatism’s attachment to free market ideology.

In “Shattered Society,” Blond, a London-based academic turned political activist, laments the loss of the mediating institutions that sheltered individuals from the power of the state and the market.

The loss of our culture is best understood as the disappearance of civil society. Only two powers remain: the state and the market. We no longer have, in any effective independent way, local government, churches, trade unions, cooperative societies, or civic organizations that operate on the basis of more than single issues. In the past, these institutions were a means for ordinary people to exercise power. Now mutual communities have been replaced with passive, fragmented individuals.

And according to Blond, “Neither Left nor Right can offer an answer because both ideologies have collapsed as both have become the same.”  The left lives by an “agenda of cultural libertarianism” while the right espouses an agenda of “economic libertarianism,” and there is, in Blond’s view, little or no difference between them.  They have both contributed to a shattered society.  “A vast body of citizens,” Blond argues, “has been stripped of its culture by the Left and its capital by the Right, and in such nakedness they enter the trading floor of life with only their labor to sell.”

In the provocatively titled “Why Conservatives Should Read Marx,” Thakkar argues that there is no compelling reason for conservatives to wed themselves to free market ideology.  He cites Samuel Huntington, who described conservatism as a “‘situational’ ideology which necessarily varies from place to place and time to time …”  “The essence of conservatism,” Huntington believed, “is the passionate affirmation of the values of existing institutions.”

Following anthropologist Arnold Gehlen, Thakkar assumes that habits and routines, and the cultural institutions that support them, are necessary for human flourishing.  These culturally inculcated habits and routines function as instincts do for other animals.  Apart from them we would be “prone to unbearable cognitive overload,” a predicament even more palpable now than when Gehlen was writing in the middle of the last century.

But following Marx, Thakkar believes that it is in the nature of capitalism to undermine existing social and cultural institutions.  The reason is simple.  Competition drives technological innovation (not necessarily a bad thing, of course), and technological innovation in the realm of economic production elicits social change as well.  “To the degree that technological change is built into capitalism,” Thakkar summarizes, “so must institutional change be.  In every single generation certain institutions will become obsolete, and with them their attendant practices and values.”

Whatever one may think about the merits of this process, it certainly isn’t inherently conservative.  As Thakkar writes further on, “In theory it is possible to be an economic libertarian and a social conservative; in practice the two are irreconcilable.”

You can read both pieces to get the whole of their respective arguments as well as their proposals for moving forward.  Neither Thakkar nor Blond claims to be against the free market, but both are in favor of re-prioritizing the health of society, particularly its mediating institutions.  In Blond’s view, this can lead to a “popular capitalism” that entails “a market economy of widely disbursed property, of multiple centers of innovation, of the decentralization of capital, wealth, and power.”

For Thakkar, this means pursuing a “commitment to think each case through on its own merits:  if something is harmful or unjust, we should try to change it; but if something valuable is being destroyed, we should try to conserve it,” rather than blindly submitting to the demands of the growth economy.

Whether we agree with the details of their policy suggestions or not, it seems to me that both Thakkar and Blond, like Lasch before them, have perceptively diagnosed the inner tensions of the political right (and left) and the cultural consequences of those tensions.