The Humanities, the Sciences, and the Nature of Education

Over the last couple of months, Steven Pinker and Leon Wieseltier have been trading shots in a debate about the relationship between the sciences and the humanities. Pinker, a distinguished scientist whose work ranges from linguistics to cognitive psychology, kicked things off with an essay in The New Republic titled “Science Is Not Your Enemy.” The essay was published with a video response by Wieseltier, The New Republic’s longstanding literary editor, already embedded. It was less than illuminating. A little while later, Wieseltier published a more formal response, which someone unfortunately titled “Crimes Against the Humanities.” A little over a week ago, both Pinker and Wieseltier produced their final volleys in “Science v. the Humanities, Round III.”

I’ll spare you a play-by-play, or blow-by-blow as the case may be. If you’re interested, you can click over and read each essay. You might also want to take a look at Daniel Dennett’s comments on the initial exchange. The best I can do by way of summary is this: Pinker is encouraging embattled humanists to relax their suspicions and recognize the sciences as friends and allies from which they can learn a great deal. Wieseltier believes that any “consilience” with the sciences on the part of the humanities will amount to a surrender to an imperialist foe rather than a collaboration with an equal partner.

The point of contention, once some of the mildly heated rhetoric is accounted for, seems to be the terms of the relationship between the two sets of disciplines. Both agree that the sciences and the humanities should not be hermetically sealed off from one another, but they disagree about the conditions and fruitfulness of their exchanges.

If we must accept the categories, I think of myself as a humanist with interests that include the sciences. I’m generally predisposed to agree with Wieseltier to a certain extent, yet I found myself doing so rather tepidly. I can’t quite throw myself behind his defense of the humanities. Nor, however, can I be as sanguine as Pinker about the sort of consilience he imagines.

What I can affirm with some confidence is also the point Pinker and Wieseltier might agree upon: neither serious humanistic knowledge nor serious scientific knowledge appears to be flourishing in American culture. But then again, this surmise is mostly based on anecdotal evidence. I’d want to make this claim more precise and ground it in more substantive evidence.

That said, Pinker and Wieseltier both appear to have the professional sciences and humanities primarily in view. My concern, however, is not only with the professional caste of humanists or scientists. My concern is also with the rest of us, myself included: those who are not professors or practitioners (strictly speaking), but who, simply by virtue of being human, seek genuine encounters with truth, goodness, and beauty.

To frame the matter in this way is to break free of the binary opposition that fuels the science/humanities wars. There is, ultimately, no zero-sum game for truth, goodness, and beauty, if these are what we’re after. The humanities and the sciences amount to a diverse set of paths, each, at its best, leading to a host of vantage points from which we might perceive the world truly, apprehend its goodness, and enjoy its beauty. Human culture would be a rather impoverished and bleak affair were only a very few of these paths available to us.

I want to believe that most of us recognize all of this intuitively. The science/humanities binary is, in fact, a rather modern development. Distinctions among the various fields of human knowledge do have an ancient pedigree, of course. And it is also true that these various fields were typically ranked within a hierarchy that privileged certain forms of knowledge over others. However, and I’m happy to be corrected on this point, the ideal was nonetheless an openness to all forms of knowledge and a desire to integrate these various forms into a well-rounded understanding of the cosmos.

It was this ideal that, during the medieval era, yielded the first universities. It was this ideal, too, that animated the pursuit of the liberal arts, which entailed both humanistic and scientific disciplines (although to put it that way is anachronistic): grammar, logic, rhetoric, arithmetic, music, geometry, and astronomy. The well-trained mind was to be conversant with each of these.

All well and good, you may say, but it seems as though seekers of truth, goodness, and beauty are few and far between. Hence, for instance, Pinker’s and Wieseltier’s respective complaints. The real lesson, after all, of their contentious exchange, one which Wieseltier seems to take at the end of his last piece, is this: while certain professional humanists and scientists bicker about the relative prestige of their particular tribes, the cultural value of both humanistic and scientific knowledge diminishes.

Why might this be the case?

Here are a couple of preliminary thoughts–not quite answers, mind you–that I think relevant to the discussion.

1. Sustaining wonder is critical. 

The old philosophers taught that philosophy, the pursuit of wisdom, began with wonder. Wonder is something that we have plenty of as children, but somehow, for most of us anyway, the supply seems to run increasingly dry as we age. I’m sure that there are many reasons for this unfortunate development, but might it be the case that professional scientists and humanists both are partly to blame? And, so as not to place myself beyond criticism, perhaps professional teachers of the sciences and humanities are also part of the problem. Are we cultivating wonder, or are we complicit in its erosion?

2. Education is not merely the transmission of information.

To borrow a formulation from T.S. Eliot: Information, that is, an assortment of undifferentiated facts, is not the same as knowledge; and knowledge is not yet wisdom. One may, for example, have memorized all sorts of random historical facts, but that does not make one a historian. One may have learned a variety of mathematical operations or geometrical theorems, but that does not make one a mathematician. To say that one understands a particular discipline or field of knowledge is not necessarily to know every fact assembled under the purview of that field. Rather, it is to be able to see the world through the perspective of that field. A mathematician is one who is able to see the world and to think mathematically. A historian is one who is able to see the world and to think historically.

Wonder, then, is not sustained by the accumulation of facts. It is sustained by the opening up of new vistas–historical, philosophical, mathematical, scientific, etc.–on reality that continually reveal its depth, complexity, and beauty.

Maybe it’s also the case that wonder must be sustained by love. Philosophy, to which wonder ought to lead, is etymologically the “love of wisdom.” Absent that love, the wonder dissipates and leaves behind no fruit. This possibility brings to mind a passage from Iris Murdoch’s The Sovereignty of Good, a work of moral philosophy, in which she describes the experience of learning Russian:

“I am confronted by an authoritative structure which commands my respect. The task is difficult and the goal is distant and perhaps never entirely attainable. My work is a progressive revelation of something which exists independently of me…. Love of Russian leads me away from myself towards something alien to me, something which my consciousness cannot take over, swallow up, deny or make unreal. The honesty and humility required of the student — not to pretend to know what one does not know — is the preparation for the honesty and humility of the scholar who does not even feel tempted to suppress the fact which damns his theory.”

We should get it out of our heads that education is chiefly or merely about training minds. It must address the whole human person: the mind, yes, but also the heart, the eyes, the ears, and the hands. It is a matter of character and habits and virtues and loves. The most serious reductionism is that which reduces education to the transfer of information, in which case it makes little difference whether that information is of the humanistic or scientific variety.

As Hubert Dreyfus pointed out several years ago in his discussion of online education, the initial steps of skill acquisition most closely resemble the mere transmission of information. But passing beyond these early stages of education in any discipline involves the presence of another human being for a variety of significant reasons. Not least of these is the fact that we must come to love what we are learning, and our loves tend to be formed in the context of personal relationships. They are caught, as it were, from another who has already come to love a certain kind of knowledge or a certain way of approaching the world embedded in a particular discipline.

What then is the sum of this meandering post? First, the sciences and the humanities are partners, but not primarily partners in the accomplishment of their own respective goals. They are partners in the education of human beings who are alive to the fullness of the world they inhabit. Secondly, if that work is to yield fruit, then education in both the sciences and the humanities must be undertaken with a view to the full complexity of the human person and the motives that drive and sustain the meaningful pursuit of knowledge. And that, I know, is far easier said than done.

Why Did Curiosity’s Landing Generate So Much Attention?

[Image: NASA photo PIA14175]

Take a look at the image to the right. Care to guess what you’re looking at?

I might very well be wrong, but I imagine that more than a few people would guess that they are looking at an image of Curiosity during its descent stage onto the surface of Mars — there’s the chute and there’s the capsule. This would be a reasonable conjecture, but also an incorrect one.

The image is of another spacecraft, the Phoenix lander, making its final descent onto the surface of Mars on May 25, 2008. Remember that one? I didn’t. It was, as it turns out, the first time the landing of one spacecraft on the surface of a planet was photographed by another spacecraft.

Now, I don’t want to make too much of my own forgetfulness or inattentiveness, but I was surprised to learn that Curiosity’s successful landing was the seventh such success in NASA’s history.

Viking 1 and Viking 2 each landed on Mars in 1976. The Mars Pathfinder and its rover, Sojourner, landed in 1997. Two rovers, Spirit and Opportunity, landed in 2004. And finally, Phoenix landed in 2008. (Check out a great infographic of missions to Mars here. H/T Jeremy Antley.)

I’ve been a low-level space geek since I was a child, and so I wasn’t entirely oblivious to this history. But being reminded of it did make the publicity surrounding Curiosity, well, curious.

Why was it that Curiosity’s landing was received with such fanfare? And why did it evoke such a powerful emotional response when previous landings, recent and impressive, had not?

Naturally, I took to Twitter with my query. Now, I don’t have nearly enough followers to make this as fruitful a venture as it might be for others, but, thanks to some retweets, it did return a few interesting suggestions.

In my initial tweet, I asked why Curiosity’s landing had garnered so much attention and floated four possible explanations: size, degree of difficulty, the trailer, and social media.

Here’s how I arrived at those suggestions. Size? Curiosity was by far the largest such vehicle. Degree of difficulty? Given its size, landing the rover safely necessitated an ingenious and elaborate multi-stage landing system. Trailer? I was referring to the dramatically titled video produced by NASA, “Seven Minutes of Terror,” depicting Curiosity’s planned descent. Social media? Well, for starters, Curiosity has its own Twitter feed: @MarsCuriosity.

So what kind of responses did I get? A few suggested what one person neatly summed up as “Space shuttle ennui.” In other words, Curiosity stepped in to fill a gap created by the retirement of the space shuttles — the last voyage of which tapped the technological sublime.

Relatedly, it was suggested that Curiosity filled a void created by the absence of any inspiring visions for our future in relation to space, or perhaps for any future.

Others pointed to some variation of the suggestions I offered: the complexity of the skycrane, social media coverage of the landing, and NASA’s improved self-promotion.

All of these seem to have some role to play in creating the event that was Curiosity’s landing — the “Seven Minutes of Terror” video hooked me — but I’m not sure that any of them alone, or even all of them together, satisfactorily explains the phenomenon.

Here’s my take as it stands: it is mostly a case of the distinctly American technological sublime. This actually draws both streams of responses together: those that focused on the filling of some collectively felt absence and those that emphasized the sophistication of the technology involved. David Nye’s account of the American technological sublime included both the existential experience of a technology that left one in awe and participation in what amounted to a (Durkheimian) civil religion.

As a civil religious experience, the technological sublime provided a sense of national identity, purpose, and destiny. Experiences of the technological sublime — whether seeing the first railroads or the first electrified cityscapes, standing before the Hoover Dam, or witnessing a Saturn V launch, to name a few instances — forged the collective national character. They were rituals of solidarity. They inspired confidence in what we could accomplish and, therefore, hope for the future.

It could be argued that we are a nation casting about for renewed unity and sense of purpose. There is a felt need for what the technological sublime had supplied earlier generations of Americans. The consumer technologies that surround us today, impressive as they are in many respects, don’t quite have the capacity to elicit the sublime experience, in part because they merely (and “merely” is not quite the right word there) enhance or repackage what earlier technologies first accomplished long ago. The latest cell phone technology can never compete with the experience of hearing a voice over the telephone for the first time, for example.

Curiosity stepped into this fractured and disillusioned cultural milieu, and it was dynamic and extraordinary enough to evoke the sublime response. Remember the tears that flowed at mission control. It was the right technology at the right time. And social media supplied the sense of collective experience so critical to its civil religious function.

So now, your thoughts. Did you tune in to the live feed from mission control? Did you give Curiosity more attention than you usually would to the space program? Were you moved by the whole thing? Did you notice that this was the case for others even if it wasn’t quite your reaction? Pure media hype? Sheer awesomeness? To what does Curiosity owe its vaunted status?

Maybe it’s all just Curiosity’s WALL•E-esque anthropomorphic charm. Or am I the only one who sees that?

Beware of Reporters Bearing the “Latest Studies”

Some while ago I posted a pretty funny take on the Science News Cycle courtesy of PHD Comics. Every now and then I link to it when posting about some “recent study” because it is helpful to remember the distortions that can take place as information works its way from the research lab to the evening news.

I was reminded of that comic again when I read this bit from a story in New Scientist:

“To find out if behaviour in a virtual world can translate to the physical world, Ahn randomly assigned 47 people either to inhabit a lumberjack avatar and cut down virtual trees with a chainsaw, or to simply imagine doing so while reading a story. Those who did the former used fewer napkins (five instead of six, on average) to clean up a spill 40 minutes later, showing that the task had made them more concerned about the environment.”

Surely something has been lost in translation from data to conclusion, no? The author notes that this is from an “unpublished” study, which gives me renewed confidence in the peer review process.
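Just how little do the reported numbers settle? Here is a minimal back-of-envelope sketch of the problem. The group sizes and standard deviations below are my own hypothetical assumptions, since the article reports only the means and the total of 47 participants:

```python
# A rough check of how much "five napkins instead of six, on average"
# could mean. Group sizes and standard deviations are hypothetical;
# the New Scientist piece does not report them.
from scipy.stats import ttest_ind_from_stats

for assumed_sd in (1.0, 2.0, 3.0):
    # Two-sample t-test from summary statistics: mean napkin counts of
    # 5 (avatar group) vs. 6 (imagination group), 47 people split roughly in half.
    result = ttest_ind_from_stats(
        mean1=5.0, std1=assumed_sd, nobs1=24,
        mean2=6.0, std2=assumed_sd, nobs2=23,
    )
    print(f"assumed SD = {assumed_sd}: p = {result.pvalue:.3f}")

# With an SD of 1 napkin the difference looks significant; with an SD
# of 3 it looks like noise. The headline means alone decide nothing.
```

None of this reproduces the study’s actual analysis, of course; it only illustrates how much the headline numbers leave out, and why review by someone with the full data matters.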

Well, that is until I read a somewhat alarming post by neuroscientist Daniel Bor, “The dilemma of weak neuroimaging papers,” which contains this summation and query:

“Okay, so we’re stuck with a series of flawed publications, imperfect education about methods, and a culture that knows it can usually get away with sloppy stats or other tricks, in order to boost publications.  What can help solve some of these problems?”

All in all, we do well to proceed with healthy skepticism.

Another Controversial “We Can, But Ought We?” Issue

A few days ago I posted links to two pieces that raised the question of whether we ought to do what new technologies may soon allow us to do. Both of those cases, interestingly, revolved around matters of time. One explored the possibility of erasing memories and the other discussed the possibility of harnessing the predictive power of processing massive amounts of data. The former touched on our relationship to the past, the latter on our relationship to the future.

A recent article in Canada’s The Globe and Mail tackles the thorny questions surrounding genetic screening and selection. Here’s the set up:

Just as Paracelsus wrote that his recipe worked best if done in secret, modern science is quietly handing humanity something the quirky Renaissance scholar could only imagine: the capacity to harness our own evolution. We now have the potential to banish the genes that kill us, that make us susceptible to cancer, heart disease, depression, addictions and obesity, and to select those that may make us healthier, stronger, more intelligent.

The question is, should we?

That is the question. As you might imagine, there are vocal advocates and critics. I’ve steered clear of directly addressing issues related to reproductive technologies because I am not trained as a bioethicist. But these are critical issues and the vast majority of those who will make real-world decisions related to the possibilities opened up by new bio-genetic technologies will have no formal training in bioethics either. It’s best we all start thinking about these questions.

According to one of the doctors cited in the article, “We are not going to slow the technology, so the question is, how do we use it?” He added, “Twenty years from now, you have to wonder if all babies will be conceived by IVF.”

Of course, it is worth considering what is assumed when someone claims that we are not going to slow down the technology. But as is usually the case, the technical capabilities are outstripping the ethical debate — at least, the sort of public debate that could theoretically shape the social consensus.

About three paragraphs into the article, I began thinking about the film Gattaca, which I’ve long considered an excellent reflection on the kinds of issues involved in genetic screening. Two or three paragraphs further in, then, I was pleased to see the film mentioned. I’ve suggested before (here and here) that stories are often our best vehicles for ethical reflection; they make abstract arguments concrete. For example, one of the best critiques of utilitarianism that I have read is Ursula K. Le Guin’s short story “The Ones Who Walk Away from Omelas.” Gattaca is a story that serves a similar function in relation to genetic screening and selection. I’d recommend checking the movie out if you haven’t seen it.

Knowing Without Understanding: Our Crisis of Meaning?

On the off chance that David Weinberger had stumbled upon my slightly snarky remarks on an interview he gave at Salon, he might very well have been justified in concluding that I missed his point. In my defense, that point didn’t exactly get across in the interview, which, taken alone, is still decidedly underwhelming. But the excerpt from Weinberger’s book that I subsequently came across at The Atlantic did address the very point I took issue with in my initial post — the question of meaning.

In the interview, Weinberger commented on the inadequacy of a view of knowledge that conceived of the work of knowing as a series of successively reductive steps moving from data to information to knowledge and finally to wisdom. As he described it, the progression was understood as a process of filtering the useful from the superfluous, or finding the proverbial needle in a haystack. Earlier in the interview he had referred to our current filter problem, i.e., our filters in the digital age do not filter anything out.

At the time this reminded me of Nicholas Carr’s comments some while ago on Clay Shirky’s approach to the problem of information overload. Shirky argued that our problem is not information overload but, rather, filter failure. We just haven’t developed adequate filters for the new digital information environment. Carr, rightly I believe, argued that our problem is not that our filters are inadequate but that they are too good. We don’t have a needle in a haystack problem; we have a stack of needles.

This is analogous to the point Weinberger makes about filters: “Digital filters don’t remove anything; they only reduce the number of clicks that it takes to get to something.”
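To make the distinction concrete, here is a toy sketch (my own illustration, not Weinberger’s) contrasting an old-style filter, which removes items, with a digital filter, which removes nothing and merely reorders:

```python
# An old-style editorial filter discards what fails the test; a digital
# filter in Weinberger's sense keeps everything and only reorders it,
# reducing the clicks needed to reach the relevant item.
def editorial_filter(items, keep):
    """Whatever fails the test is simply gone."""
    return [item for item in items if keep(item)]

def digital_filter(items, relevance):
    """Everything survives; the relevant floats to the top."""
    return sorted(items, key=relevance, reverse=True)

articles = ["cat video", "peer-reviewed study", "hot take", "archive scan"]
print(editorial_filter(articles, lambda a: "study" in a))
# ['peer-reviewed study'] -- the rest no longer exists for the reader
print(digital_filter(articles, lambda a: float("study" in a)))
# the study is ranked first, but every other item remains a click away
```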

All of this comes across more coherently and compellingly in the book excerpt. There Weinberger deals directly with the significance of the book’s title, Too Big to Know. Science now operates with data sets that do not yield to human understanding. Computers are able to process the data and produce workable models that make the data useful, but we don’t necessarily understand why. We have models, but not theories; hence the title of the excerpt in The Atlantic, “To Know, but Not Understand.”
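To make “models, but not theories” a bit more concrete, here is a minimal sketch (my own illustration, not an example from the book) of a black-box model that predicts a hidden nonlinear relationship quite well while yielding nothing like an intelligible law:

```python
# A black-box learner predicts a tangled relationship accurately, yet
# its "explanation" is hundreds of deep decision trees, not a theory.
# The synthetic data and choice of model are my own illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(5000, 3))
# The "phenomenon": an interaction we pretend not to know in advance.
y = np.sin(X[:, 0] * X[:, 1]) + np.exp(-X[:, 2] ** 2) + rng.normal(0, 0.1, 5000)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:4000], y[:4000])
print("held-out R^2:", model.score(X[4000:], y[4000:]))  # high accuracy...

# ...but no insight: the model is an ensemble of thousands of branch rules.
print("trees:", len(model.estimators_),
      "nodes in the first tree:", model.estimators_[0].tree_.node_count)
```

The point of the sketch is only that predictive adequacy and human understanding can come apart: the model is usable without being intelligible.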

Returning to my initial post, I criticized the manner in which Weinberger framed the movement from information to wisdom on the grounds that it took no account of meaning. In my estimation, moving from information to wisdom was not merely a matter of reducing a sea of information to a manageable set of knowledge that can be applied; it was also a matter of deriving meaning from the information, and the construction of meaning cannot be abstracted from individual human beings and their lived experience.

Now, let me provide Weinberger’s rejoinder for him: The question of creating meaning at the level of the individual is moot. When dealing with the amount of data under consideration, meaning is no longer an option. We are in the position of being able to act without understanding; we can do what we cannot understand.

The problem, if indeed we can agree that it is a problem, of doing without understanding is not a unique consequence of the digital age and the power of supercomputers to amass and crunch immense amounts of data. As I wrote in the first of a still unfinished triptych of posts, Hannah Arendt expressed similar concerns over half a century ago:

Writing near the midpoint of the last century, Hannah Arendt worried that we were losing the ability “to think and speak about the things which nevertheless we are able to do.” The advances of science were such that representing what we knew about the world could be done only in the language of mathematics, and efforts to represent this knowledge in a publicly meaningful and accessible manner would become increasingly difficult, if not altogether impossible.  Under such circumstances speech and thought would part company and political life, premised as it is on the possibility of meaningful speech, would be undone.  Consequently, “it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking.”

But the situation was not identical. In Arendt’s scenario, there were still a privileged few who could meaningfully understand the science. It was a problem posed by complexity, and some few brilliant minds could still grasp what the vast majority of us could not. What Weinberger describes is a situation in which no human mind is able to meaningfully understand the phenomenon. It is a matter of complexity, yes, but it is irredeemably aggravated by the magnitude and scale of the data involved.

I’ve long thought that many of our discontents stem from analogous problems of scale. Joseph Stalin allegedly claimed that “the death of one man is a tragedy, the death of millions is a statistic.” Whether Stalin in fact said this or not, the line captures a certain truth. There is a threshold of scale at which we pass from that which we can meaningfully comprehend to that which blurs into the indistinguishable. The gap, for example, between a public debt of $1 trillion and one of $3 trillion is immense, but I suspect that for most of us the difference at this scale is no longer meaningful. It might as well be $20 trillion; it is all the same, which is to say it is equally unfathomable. And this is to say nothing of the byzantine quality of the computerized global financial industry as a whole.

In a recent post about the opacity of the banking system, Steve Waldman concluded as follows:

“This is the business of banking. Opacity is not something that can be reformed away, because it is essential to banks’ economic function of mobilizing the risk-bearing capacity of people who, if fully informed, wouldn’t bear the risk. Societies that lack opaque, faintly fraudulent, financial systems fail to develop and prosper. Insufficient economic risks are taken to sustain growth and development. You can have opacity and an industrial economy, or you can have transparency and herd goats . . . .

Nick Rowe memorably described finance as magic. The analogy I would choose is finance as placebo. Financial systems are sugar pills by which we collectively embolden ourselves to bear economic risk. As with any good placebo, we must never understand that it is just a bit of sugar. We must believe the concoction we are taking to be the product of brilliant science, the details of which we could never understand. The financial placebo peddlers make it so.”

In a brief update, Waldman added,

“I have presented an overly flattering case for the status quo here. The (real!) benefits to opacity that I’ve described must be weighed against the profound, even apocalyptic social costs that obtain when the placebo fails, especially given the likelihood that placebo peddlers will continue their con long after good opportunities for investment at scale have been exhausted.”

Needless to say, this is a less than comforting set of circumstances. Yet “apocalyptic social costs” are not my main concern at the moment. Rather, it is what we might, perhaps hyperbolically, call the apocalyptic psychic costs incurred by living in a time during which substantial swaths of experience are rendered unintelligible.

I appreciate Waldman’s placebo analogy; it gets at an important dimension of the situation. But Rowe’s analogy to magic is worth retaining. If you’ve been reading here for a while, you’ll remember a handful of posts along the way that draw an analogy between magic and technology. It is an observation registered by C. S. Lewis, Lewis Mumford, and Jacques Ellul, among others, and it is considered at book length by Richard Stivers. The analogy is taken in various directions, but what strikes me is the manner in which it troubles our historical narratives.

We often think of the whole of the pre-modern era as an age dominated by magical, which is to say unscientific, thinking. Beginning with the Renaissance and continuing through the complex historical developments we gloss as the Scientific Revolution and the Enlightenment, we escape the realm of magic into the arena of science. And yet it would seem that at the far end of that trajectory, taking it uncritically at face value, there is a reversion to magical thinking. It is, of course, true that there is a substantial difference — our magic “works.” But at the phenomenological level, the difference may be inconsequential. Our thinking may not be magical in the same way, but much of our doing proceeds as if by magic, without our understanding.

I suspect this is initially wondrous and enchanting, but over time it is finally unsettling and alienating.

We are all of us, even the brightest among us, embedded in systems we understand vaguely and partially at best. Certain few individuals understand certain few aspects of the whole, but no one understands the whole. And, it would seem, the more we are able to know, the less we are capable of understanding. Consider the much discussed essay by MIT physicist Alan Lightman in Harper’s. Take time to read the whole essay, but here is the conclusion:

“That same uncertainty disturbs many physicists who are adjusting to the idea of the multiverse. Not only must we accept that basic properties of our universe are accidental and uncalculable. In addition, we must believe in the existence of many other universes. But we have no conceivable way of observing these other universes and cannot prove their existence. Thus, to explain what we see in the world and in our mental deductions, we must believe in what we cannot prove.

Sound familiar? Theologians are accustomed to taking some beliefs on faith. Scientists are not. All we can do is hope that the same theories that predict the multiverse also produce many other predictions that we can test here in our own universe. But the other universes themselves will almost certainly remain a conjecture.

‘We had a lot more confidence in our intuition before the discovery of dark energy and the multiverse idea,’ says Guth. ‘There will still be a lot for us to understand, but we will miss out on the fun of figuring everything out from first principles.’”

Faith? “All we can do is hope…”?

We have traveled from magic to magic and from faith to faith through an interval of understanding. Of course, it is possible to conclude that we’ve always failed to understand; it is just that now we know we don’t understand. Having banked heavily on a specific type of understanding and the mastery it could yield, we appear now to have come up at the far end against barriers to understanding and meaningful action. And in this sense we may be, from a certain perspective, worse off. The acknowledged limits of our knowing and understanding in a premodern setting took their place within a context of intelligibility; the lack of understanding was itself rendered meaningful within the larger metaphysical picture of reality. The unfolding awareness of the limits of our knowing, by contrast, takes place within a context in which intelligibility was staked on knowing and understanding, in which there was no metaphysical space for mystery, as it were. Premodern people acted meaningfully in the context of what appears from our perspective as a deep ignorance; it now seems that we are consigned to act without meaning in the context of a surfeit of knowledge.

I’m tempted to conclude by suggesting that the last metaphysical shock to arise from the Promethean enterprise may then be the startled recognition of the Hephaestean chains. But that may be too glib, and these are serious matters. Too serious, certainly, to tie up neatly at the end of an off-the-cuff morning blog post. This was all, as they say, shot from the hip. I welcome any thoughts you might have on any of this.