Another Controversial “We Can, But Ought We?” Issue

A few days ago I posted links to two pieces that raised the question of whether we ought to do what new technologies may soon allow us to do. Both of those cases, interestingly, revolved around matters of time. One explored the possibility of erasing memories and the other discussed the possibility of harnessing the predictive power of processing massive amounts of data. The former touched on our relationship to the past, the latter on our relationship to the future.

A recent article in Canada’s The Globe and Mail tackles the thorny questions surrounding genetic screening and selection. Here’s the setup:

Just as Paracelsus wrote that his recipe worked best if done in secret, modern science is quietly handing humanity something the quirky Renaissance scholar could only imagine: the capacity to harness our own evolution. We now have the potential to banish the genes that kill us, that make us susceptible to cancer, heart disease, depression, addictions and obesity, and to select those that may make us healthier, stronger, more intelligent.

The question is, should we?

That is the question. As you might imagine, there are vocal advocates and critics. I’ve steered clear of directly addressing issues related to reproductive technologies because I am not trained as a bioethicist. But these are critical issues and the vast majority of those who will make real-world decisions related to the possibilities opened up by new bio-genetic technologies will have no formal training in bioethics either. It’s best we all start thinking about these questions.

According to one of the doctors cited in the article, “We are not going to slow the technology, so the question is, how do we use it?” He added, “Twenty years from now, you have to wonder if all babies will be conceived by IVF.”

Of course, it is worth considering what is assumed when someone claims that we are not going to slow down the technology. But as is usually the case, the technical capabilities are outstripping the ethical debate — at least, the sort of public debate that could theoretically shape the social consensus.

About three paragraphs into the article, I began thinking about the film Gattaca, which I’ve long considered an excellent reflection on the kinds of issues involved in genetic screening. Two or three paragraphs further into the article, then, I was pleased to see the film mentioned. I’ve suggested before (here and here) that stories are often our best vehicles for ethical reflection; they make abstract arguments concrete. For example, one of the best critiques of utilitarianism that I have read was Ursula K. Le Guin’s short story “The Ones Who Walk Away from Omelas.” Gattaca is a story that serves a similar function in relation to genetic screening and selection. I’d recommend checking the movie out if you haven’t seen it.

Knowing Without Understanding: Our Crisis of Meaning?

On the off chance that David Weinberger had stumbled upon my slightly snarky remarks on an interview he gave at Salon, he might very well have been justified in concluding that I missed his point. In my defense, that point didn’t exactly get across in the interview, which, taken alone, is still decidedly underwhelming. But the excerpt from Weinberger’s book that I subsequently came across at The Atlantic did address the very point I took issue with in my initial post — the question of meaning.

In the interview, Weinberger commented on the inadequacy of a view of knowledge that conceived of the work of knowing as a series of successively reductive steps moving from data to information to understanding and finally to wisdom. As he described it, the progression was understood as a process of filtering the useful from the superfluous, or finding the proverbial needle in a haystack. Earlier in the interview he had referred to our current filter problem, i.e., our filters in the digital age do not filter anything out.

At the time this reminded me of Nicholas Carr’s comments some while ago on Clay Shirky’s approach to the problem of information overload. Shirky argued that our problem is not information overload but, rather, filter failure: we just haven’t developed adequate filters for the new digital information environment. Carr, rightly I believe, argued that our problem is not that our filters are inadequate; it’s that they are too good. We don’t have a needle-in-a-haystack problem; we have a stack of needles.

This is analogous to the point Weinberger makes about filters: “Digital filters don’t remove anything; they only reduce the number of clicks that it takes to get to something.”

All of this comes across more coherently and compellingly in the book excerpt. There Weinberger deals directly with the significance of the book’s title, Too Big to Know. Science now operates with data sets that do not yield to human understanding. Computers are able to process the data and produce workable models that make the data useful, but we don’t necessarily understand why. We have models, but not theories; hence the title of the excerpt in The Atlantic, “To Know, but Not Understand.”

Returning to my initial post, I criticized the manner in which Weinberger framed the movement from information to wisdom on the grounds that it took no account of meaning. In my estimation, moving from information to wisdom was not merely a matter of reducing a sea of information to a manageable set of knowledge that can be applied; it was also a matter of deriving meaning from the information, and the construction of meaning cannot be abstracted from individual human beings and their lived experience.

Now, let me provide Weinberger’s rejoinder for him: The question of creating meaning at the level of the individual is moot. When dealing with the amount of data under consideration, meaning is no longer an option. We are in the position of being able to act without understanding; we can do what we cannot understand.

The problem, if indeed we can agree that it is a problem, of doing without understanding is not a unique consequence of the digital age and the power of supercomputers to amass and crunch immense amounts of data. As I wrote in the first of a still unfinished triptych of posts, Hannah Arendt expressed similar concerns over half a century ago:

Writing near the midpoint of the last century, Hannah Arendt worried that we were losing the ability “to think and speak about the things which nevertheless we are able to do.” The advances of science were such that representing what we knew about the world could be done only in the language of mathematics, and efforts to represent this knowledge in a publicly meaningful and accessible manner would become increasingly difficult, if not altogether impossible.  Under such circumstances speech and thought would part company and political life, premised as it is on the possibility of meaningful speech, would be undone.  Consequently, “it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking.”

But the situation was not identical. In Arendt’s scenario, there were still a privileged few who could meaningfully understand the science. It was a problem posed by complexity, and some few brilliant minds could still grasp what the vast majority of us could not. What Weinberger describes is a situation in which no human mind is able to meaningfully understand the phenomenon. It is a matter of complexity, yes, but it is irredeemably aggravated by the magnitude and scale of the data involved.

I’ve long thought that many of our discontents stem from analogous problems of scale. Joseph Stalin allegedly claimed that “The death of one man is a tragedy, the death of millions is a statistic.” Whether Stalin in fact said this or not, the line captures a certain truth. There is a threshold of scale at which we pass from that which we can meaningfully comprehend to that which blurs into the indistinguishable. The gap, for example, between a public debt of $1 trillion and one of $3 trillion is immense, but I suspect that for most of us the difference at this scale is no longer meaningful. It might as well be $20 trillion; it is all the same, which is to say it is equally unfathomable. And this is to say nothing of the byzantine quality of the computerized global financial industry as a whole.

In a recent post about the opacity of the banking system, Steve Waldman concluded as follows:

“This is the business of banking. Opacity is not something that can be reformed away, because it is essential to banks’ economic function of mobilizing the risk-bearing capacity of people who, if fully informed, wouldn’t bear the risk. Societies that lack opaque, faintly fraudulent, financial systems fail to develop and prosper. Insufficient economic risks are taken to sustain growth and development. You can have opacity and an industrial economy, or you can have transparency and herd goats . . . .

Nick Rowe memorably described finance as magic. The analogy I would choose is finance as placebo. Financial systems are sugar pills by which we collectively embolden ourselves to bear economic risk. As with any good placebo, we must never understand that it is just a bit of sugar. We must believe the concoction we are taking to be the product of brilliant science, the details of which we could never understand. The financial placebo peddlers make it so.”

In a brief update, Waldman added,

“I have presented an overly flattering case for the status quo here. The (real!) benefits to opacity that I’ve described must be weighed against the profound, even apocalyptic social costs that obtain when the placebo fails, especially given the likelihood that placebo peddlars will continue their con long after good opportunities for investment at scale have been exhausted.”

Needless to say, this is a less than comforting set of circumstances. Yet, “apocalyptic social costs” are not my main concern at the moment. Rather it is what we might, perhaps hyperbolically, call the apocalyptic psychic costs incurred by living in a time during which substantial swaths of experience are rendered unintelligible.

I appreciate Waldman’s placebo analogy; it gets at an important dimension of the situation, but Rowe’s analogy to magic is worth retaining. If you’ve been reading here for a while, you’ll remember a handful of posts along the way that draw an analogy between magic and technology. It is an observation registered by C. S. Lewis, Lewis Mumford, and Jacques Ellul, among others, and it is considered at book length by Richard Stivers. The analogy is taken in various directions, but what strikes me is the manner in which it troubles our historical narratives.

We often think of the whole of the pre-modern era as an age dominated by magical, which is to say unscientific, thinking. Beginning with the Renaissance and continuing through the complex historical developments we gloss as the Scientific Revolution and the Enlightenment, we escape the realm of magic into the arena of science. And yet, it would seem that at the far end of that trajectory, taking it uncritically at face value, there is a reversion to magical thinking. It is, of course, true that there is a substantial difference — our magic “works.” But at the phenomenological level, the difference may be inconsequential. Our thinking may not be magical in the same way, but much of our doing proceeds as if by magic, without our understanding.

I suspect this is initially wondrous and enchanting, but over time it is finally unsettling and alienating.

We are all of us, even the brightest among us, embedded in systems we understand vaguely and partially at best. Certain few individuals understand certain few aspects of the whole, but no one understands the whole. And, it would seem, the more we are able to know, the less we are capable of understanding. Consider the much discussed essay by MIT physicist Alan Lightman in Harper’s. Take time to read the whole, but here is the conclusion:

“That same uncertainty disturbs many physicists who are adjusting to the idea of the multiverse. Not only must we accept that basic properties of our universe are accidental and uncalculable. In addition, we must believe in the existence of many other universes. But we have no conceivable way of observing these other universes and cannot prove their existence. Thus, to explain what we see in the world and in our mental deductions, we must believe in what we cannot prove.

Sound familiar? Theologians are accustomed to taking some beliefs on faith. Scientists are not. All we can do is hope that the same theories that predict the multiverse also produce many other predictions that we can test here in our own universe. But the other universes themselves will almost certainly remain a conjecture.

‘We had a lot more confidence in our intuition before the discovery of dark energy and the multiverse idea,’ says Guth. ‘There will still be a lot for us to understand, but we will miss out on the fun of figuring everything out from first principles.’”

Faith? “All we can do is hope…”?

We have traveled from magic to magic and from faith to faith through an interval of understanding. Of course, it is possible to conclude that we’ve always failed to understand; it is just that now we know we don’t understand. Having banked heavily on a specific type of understanding and the mastery it could yield, we appear now to have come up at the far end against barriers to understanding and meaningful action. And in this sense we may be, from a certain perspective, worse off. The acknowledged limits of our knowing and understanding in a premodern setting took their place within a context of intelligibility; the lack of understanding was itself rendered meaningful within the larger metaphysical picture of reality. The unfolding awareness of the limits of our knowing takes place within a context in which intelligibility was staked on knowing and understanding, in which there was no metaphysical space for mystery, as it were. Premodern people acted meaningfully in the context of what appears from our perspective as a deep ignorance; it now seems that we are consigned to act without meaning in the context of a surfeit of knowledge.

I’m tempted to conclude by suggesting that the last metaphysical shock to arise from the Promethean enterprise may then be the startled recognition of the Hephaestean chains. But that may be too glib, and these are serious matters. Too serious, certainly, to tie up neatly at the end of an off-the-cuff morning blog post. This was all, as they say, shot from the hip. I welcome any thoughts you might have on any of this.

Pattern Recognition: The Genius of our Time?

What counts for genius in our times?  Is it the same as what has always counted for genius?  Or, are there shifting criteria that reflect the priorities and affordances of a particular age?

Mary Carruthers opens The Book of Memory, her study of memory in medieval culture, with a contrast between Thomas Aquinas and Albert Einstein.  Both were regarded as the outstanding intellects of their era; each elicited enthusiastic, wonder-struck praise from his contemporaries.  Carruthers cites a letter by an associate of each man as typical of the praise that each received.  Summing up both, she writes:

Of Einstein: ingenuity, intricate reasoning, originality, imagination, essentially new ideas coupled with the notion that to achieve truth one must err of necessity, deep devotion to and understanding of physics, obstinacy, vital force, single-minded concentration, solitude.  Of Thomas Aquinas: subtlety and brilliance of intellect, original discoveries coupled with deep understanding of Scripture, memory, nothing forgotten and knowledge ever-increasing, special grace, inward recourse, single-minded concentration, intense recollection, solitude.

Carruthers goes on to note how similar the lists of qualities are “in terms of what they needed for their compositional activity (activity of thought), the social isolation required by each individual, and what is perceived to be the remarkable subtlety, originality, and understanding of the product of such reasoning.”  The difference, appropriate to the object of Carruthers’ study, lies in the relationship between memory and the imagination.

Carruthers is eager to remind us that “human beings did not suddenly acquire imagination and intuition with Coleridge, having previously been poor clods.”  But there is a difference in the way these qualities were understood:

The difference is that whereas now geniuses are said to have creative imagination which they express in intricate reasoning and original discovery, in earlier times they were said to have richly retentive memories, which they expressed in intricate reasoning and original discovery.

This latter perspective, the earlier medieval perspective, is not too far removed from the connections between memory and creativity drawn by Jim Holt based on the experiences of the French mathematician Henri Poincaré. We might also note that the changing status of memory within the ecology of genius is owed at least in part to the evolution of technologies that supplement memory.  Aquinas, working in a culture in which books were still relatively scarce, would have needed a remarkably retentive memory to continue working with the knowledge he acquired through reading.  This becomes less of a priority for a post-Gutenberg society.

Mostly, however, Carruthers’ comparison suggested to me the question of what might count for genius in our own time.  We are not nearly so far removed from Einstein as Einstein was from Aquinas, but a good deal has changed nonetheless, which makes the question at least plausible.  I suspect that, as was the case between Aquinas and Einstein, there will be a good deal of continuity, a kind of baseline of genius perhaps.  But that baseline makes the shifts in emphasis all the more telling.

I don’t have a particular model for contemporary genius in mind, so this is entirely speculative, but I wonder if today, or in coming years, we might not transfer some of the wonder previously elicited by memory and imagination to something like rapid pattern recognition.  I realize there is significant overlap among these categories.  Just as memory and imagination are related in important ways, so pattern recognition is also implicit in both and has always been an important ability.  So again, it is a matter of emphasis.  But it seems to me that the ability to rapidly recognize, or even generate, meaningful patterns from an undifferentiated flow of information may be the characteristic of intelligence most suited to our times.

In Aquinas’ day the emphasis was on the memory needed to retain knowledge that was relatively scarce. In Einstein’s time the emphasis was on the ability to jump out of established patterns of thought generated by abundant but sometimes static knowledge.  In our day, we are overwhelmed by a torrent of easily available and ever-shifting information (we won’t quite say knowledge).  Under these conditions memory loses its pride of place, as does perhaps imagination.  However, the ability to draw together disparate pieces of information, or to connect seemingly unrelated points of data into a meaningful pattern that we might count as knowledge, now becomes a dimension of human intelligence that may inspire comparable awe and admiration from a culture drowning in noise.

Perhaps an analogy to wrap up:  think of the constellations as instances of pattern recognition.  Lots of data points against the night sky are drawn into patterns that are meaningful, useful, and beautiful to human beings.  For Aquinas the stars of knowledge might appear but for a moment, and to recognize the pattern he had to hold their location in memory as he learned and remembered the location of other stars.  For Einstein many more stars had appeared and they remained steadily in his intellectual field of vision; seeing new patterns where old ones had been established was his challenge.

Today we might say that the night sky is not only full to overflowing, but the configuration is constantly shifting.  Our task is not necessarily to remember the location of a few fading stars, nor is it to see new patterns in a fuller but steady field. It is to constantly recognize new and possibly temporary patterns in a full and flickering field of information. Those who are able to do this most effectively may garner the kind of respect earlier reserved for the memory of Aquinas and the imagination of Einstein.

For a different, but I think related, take on a new form of thinking for our age that draws on the imagery of constellations, I encourage you to take a look at this thread at Snark Market.

“Is Memory in the Brain?”

Most of us think of memory as something that goes on exclusively in our brains, but alongside efforts to view cognition in general as an embodied and extended activity, some researchers have been arguing that memory also has a socially extended dimension.  David Manier is among those pushing to expand our understanding of memory so that it encompasses acts of social communication as remembering.

Manier’s 2004 article, “Is Memory in the Brain?  Remembering as Social Behavior,” published in Mind, Culture, and Activity, seeks to establish social remembering as a legitimate and significant area of study for cognitive psychologists.  In order to do this, Manier begins by challenging the dominant understanding of memory, which construes memory as something located in the brain or as a faculty housed exclusively in the brain.

“Historians, anthropologists, and sociologists, often influenced by Halbwachs (1950/1980), have taken up the topic of collective memory, looking at ways that organizations preserve important aspects of the past, and ways that events of weighty historical importance (such as the Holocaust) become integrated into the collective identity of a group of people …. But among some psychologists, especially those whose emphasis is on neuroscientific approaches to memory, it is possible to detect a certain ambivalence toward this topic.” (251)

Manier intends to argue instead “for the usefulness of conceptualizing remembering as social behavior, and for expanding the science of memory to include communicative acts.” (252)  Manier and his colleagues have conducted a number of studies into what he terms conversational remembering.  These studies take place in “naturalistic contexts,” that is, everyday environments as opposed to the contrived laboratory environments in which most cognitive scientific research takes place.  Thus far, Manier’s studies suggest that the dynamics of social remembering shape the subsequent remembering of individual group members.

Manier briefly traces the history of the belief, most recently articulated by Tulving, that memory “has a home, even if still a hidden one, in the brain” back past recent neuroscientific discoveries to ancient Greece.  Plato operated with what Cropsey has termed an “obstetrical metaphor” according to which “the purpose of philosophy is to serve as a ‘midwife’ to the birth of ideas having germinal existence within the soul; in this sense, Plato saw knowing as involving an act of remembering.”  Thus memory was not conceived as mere storage of information, nor simply as a brain function, but “rather more like a journey, a quest in which conversations with a philosopher … can play a crucial role.”  (253)

According to Manier, this more conversational, dialogical, social conception of memory was displaced by Aristotle’s “emphasis on taxonomy” and his division of the soul into four faculties: the nutritive, the sensory, the locomotive, and the rational.  Memory, associated with imagination, was understood as a function of the sensory faculty through which one perceived images of things past.  While Aristotle did not maintain that what he had distinguished in theory was in fact distinguishable in reality, others who came after him were not so precise.  In Manier’s brief sketch, the notion of memory as a faculty located in the brain evolves through the medieval heirs of Aristotle, to Locke, Thomas Reid, and then on to Gall and Spurzheim (founders of phrenology), Fechner, and Ebbinghaus.  (253-254)

Certain metaphors have also reinforced this “modular or topographical” view of memory:

“Often, the metaphors have been influenced by discussions of anatomy and physiology (… ‘the mental organ’ of language production – discussed by Chomsky …).  Moreover, the industrial revolution, with its production of heavy machinery, lent weight to an emphasis on metaphors about psychological ‘mechanisms.’  The development of computers spawned a host of new metaphors for cognitive psychology, including information processing, hardware and software, systems and subsystems, control processes, input and output, the computational architecture of mind, parallel distributed processing ….” (254)

Against the “mental topography” approach, Neisser has called for “ecological validity” which “asserts the imperative of understanding ‘everyday thinking’ rather than the study (preferred by many experimental psychologists) of how isolated individuals perform on contrived experiments conducted in carefully controlled laboratory settings.” (255)  Following Bruner, Manier goes on to characterize remembering as an “act of meaning” adding, “memory is something that we as humans do, that is, it is a meaningful action we perform in the sociocultural contexts that we take part in creating, and within which we live.”  Furthermore, “If it is correct to say that memory is something we do rather than something we have, it may be more appropriate to think of remembering as a kind of cognitive behavior ….” (256)

Now Manier articulates his chief claim: “remembering can be viewed as an act of communication.”  (257)  He aligns his claim with Gilbert Ryle’s earlier argument against the “tendency to view silent thoughts as somehow real thoughts, as opposed to the thoughts that we speak aloud.”  By analogy, Manier suggests that not all remembering is silent remembering, and he offers the following definition:  “Remembering is a present communication of something past.”  He goes on to give various examples, all of which constitute acts of remembering:  solitary, private remembering; remembering in conversation with someone; and remembering through writing.  Each example was a remembrance of the same event, but each situation shifted what was remembered.  (258)

While some may argue that behind acts of remembrance there lies one’s “real memory,” physically located in the brain, Manier suggests that the “neurophysiological configuration” is “only the material basis for real acts of remembering.”  Furthermore,

“This view of acts of remembering accords with the concept of distributed cognition, according to which we humans use the cultural tools that are available to us.  As Dennett announced … we no more think with our brains than we hammer with our bare hands. And one of the important cultural tools we use in our thinking – and especially in our remembering – is group conversation.” (260)

Manier then provides a transcription of a family group conversation to illustrate how memories shift through the give and take of conversation, memories that presumably would not have been altered otherwise.  He concludes,

“Remembering is not only shaped by internal, cognitive processes.  When we reconstruct past events in the context of conversation, the conversational roles that are adopted by group members will affect what is remembered.  Moreover, conversational remembering can be shaped by other influences.  These influences on remembering – as well as a host of other sociocultural facts – tend to be missed by an approach that limits itself to what goes on in the brain.”

Manier, David.  “Is Memory in the Brain? Remembering as Social Behavior.”  Mind, Culture, and Activity 11(4): 251-266.  2004.

Darkness, Depth, Dirtiness: Metaphors and the Body

In Metaphors We Live By, Lakoff and Johnson drew attention to the significant and often unnoticed work metaphors perform in our everyday use of language.  Once you start paying attention you realize that metaphors (and figurative language in general) are not merely the ornaments of speech employed by poets and other creative types; they are an indispensable element of our most basic attempts to represent our experience of the world with words.  Lakoff and Johnson also suggested that many of our most basic metaphors (up is good/down is bad; heavy is serious, light is not) are grounded in our embodied experience of reality.  If we did not stand erect, for example, we’d have a very different set of metaphors.

It’s been thirty years since Metaphors We Live By was published, but only in the last few have studies begun confirming the link between embodied experience in the world and our metaphorical language.  Many of these studies were helpfully summarized in the January 2010 issue of the Observer, a publication of the Association for Psychological Science.  In a short article titled “The Body of Knowledge: Understanding Embodied Cognition,” Barbara Isanski and Catherine West describe a series of experimental studies that establish links between our bodily experience and our metaphorical language.  Here’s a sampling of some of those links:

  • Temperature and social relationships — think “cold shoulder”
  • Cleanness and moral purity — think Lady Macbeth or Pontius Pilate
  • Color and morality — think black is bad
  • Weight and judgment/seriousness — think “a heavy topic” or “deep issue”
  • Movement and progress/achievement — think “forward looking,” “taking a step back”

Most interesting, perhaps, are the elaborate setups of the experiments that attempted to get at these connections, and the article does a nice job of succinctly describing each. For example:

In a recent study by Nils B. Jostmann (University of Amsterdam), Daniël Lakens (Utrecht University), and Thomas W. Schubert (Instituto Superior de Ciências do Trabalho e da Empresa, Lisbon), volunteers holding a heavy clipboard assigned more importance to opinions and greater value to foreign currencies than volunteers holding lighter-weight clipboards did. A lot of physical strength is required to move heavy objects around; these results suggest that in a similar way, important issues may require a lot of cognitive effort to be dealt with.

As always, bear in mind the nature of “recent studies,” but it is not too surprising to learn that our embodied experience is at the root of our way of talking and thinking about the world.