What Are We Talking About When We Talk About Technology?

I’ve been of two minds with regard to the usefulness of the word technology. One of those two minds has been more or less persuaded that the term is of limited value and, worse still, that it is positively detrimental to our understanding of the reality it ostensibly labels. The most thorough case for this position is laid out in a 2010 article by the historian of technology Leo Marx, “Technology: The Emergence of a Hazardous Concept.”

Marx worried that the term technology was “peculiarly susceptible to reification.” The problem with a reified phenomenon is that it acquires “a ‘phantom-objectivity,’ an autonomy that seems so strictly rational and all-embracing as to conceal every trace of its fundamental nature: the relation between people.” This false aura of autonomy leads in turn to “hackneyed vignettes of technologically activated social change—pithy accounts of ‘the direction technology is taking us’ or ‘changing our lives.’” According to Marx, such accounts are not only misleading; they are also irresponsible. By investing “technology” with causal power, they distract us from “the human (especially socioeconomic and political) relations responsible for precipitating this social upheaval.” It is these relations, after all, that “largely determine who uses [technologies] and for what purposes.” And it is the human use of technology that makes all the difference, because, as Marx puts it, “Technology, as such, makes nothing happen.”[1]

As you might imagine, I find that Marx’s point complements a critique of what I’ve called Borg Complex rhetoric. It’s easier to refuse responsibility for technological change when we can attribute it to some fuzzy, inchoate idea of technology, or worse, what technology wants. That latter phrase is the title of a book by Kevin Kelly, and it may be the best example on offer of the problem Marx was combatting in his article.

But … I don’t necessarily find that term altogether useless or hazardous. For instance, some time ago I wrote the following:

“Speaking of online and offline and also the Internet or technology – definitions can be elusive. A lot of time and effort has been and continues to be spent trying to delineate the precise referent for these terms. But what if we took a lesson from Wittgenstein? Crudely speaking, Wittgenstein came to believe that meaning was a function of use (in many, but not all cases). Instead of trying to fix an external referent for these terms and then call out those who do not use the term as we have decided it must be used or not used, perhaps we should, as Wittgenstein put it, ‘look and see’ the diversity of uses to which the words are meaningfully put in ordinary conversation. I understand the impulse to demystify terms, such as technology, whose elasticity allows for a great deal of confusion and obfuscation. But perhaps we ought also to allow that even when these terms are being used without analytic precision, they are still conveying sense.”

As you know from previous posts, I’ve been working through Langdon Winner’s Autonomous Technology (1977). It was with a modicum of smug satisfaction, because I’m not above such things, that I read the following in Winner’s Introduction:

“There is, of course, nothing unusual in the discovery that an important term is ambiguous or imprecise or that it covers a wide diversity of situations. Wittgenstein’s discussion of ‘language games’ and ‘family resemblances’ in Philosophical Investigations illustrates how frequently this occurs in ordinary language. For many of our most important concepts, it is futile to look for a common element in the phenomena to which the concept refers. ‘Look and see whether there is anything common to all.–For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that.'”

Writing in the late ’70s, Winner claimed, “Technology is a word whose time has come.” After a period of relative neglect or disinterest, “Social scientists, politicians, bureaucrats, corporate managers, radical students, as well as natural scientists and engineers, are now united in the conclusion that something we call ‘technology’ lies at the core of what is most troublesome in the condition of our world.”

To illustrate, Winner cites Allen Ginsberg — “Ourselves caught in the giant machine are conditioned to its terms, only holy vision or technological catastrophe or revolution break ‘the mind-forg’d manacles.'” — and the Black Panthers: “The spirit of the people is greater than the man’s technology.”

For starters, this is a good reminder to us that we are not the first generation to wrestle with the place of technology in our personal lives and in society at large. Winner was writing almost forty years ago, after all. And Winner rightly points out that his generation was not the first to worry about such matters either: “We are now faced with an odd situation in which one observer after another ‘discovers’ technology and announces it to the world as something new. The fact is, of course, that there is nothing novel about technics, technological change, or advanced technological societies.”

While he thinks that technology is a word “whose time has come,” he is not unaware of the sorts of criticisms articulated by Leo Marx. Criticisms of this sort had already been made of the manner in which Jacques Ellul defined technology, or, more precisely, la technique: “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.”

Against Ellul’s critics, Winner writes, “While Ellul’s addition of ‘absolute efficiency’ may cause us difficulties, his notion of technique as the totality of rational methods closely corresponds to the term technology as now used in everyday English. Ellul’s la technique and our technology both point to a vast, diverse, ubiquitous totality that stands at the center of modern culture.”

It is at this point that Winner references Wittgenstein in the paragraph cited above. He then acknowledges that the way in which technology tends to be used leads to the conclusion that “technology is everything and everything is technology.” In other words, it “threatens to mean nothing.”

But Winner sees in this situation something of interest, and here is where I’m particularly inclined to agree with him against critics like Leo Marx. Rather than seek to impose a fixed definition or banish the term altogether, we should see in this situation “an interesting sign.” It should lead us to ask, “What does the chaotic use of the term technology indicate to us?”

Here is how Winner answers that question: “[…] the confusion surrounding the concept ‘technology’ is an indication of a kind of lag in public language, that is, a failure of both ordinary speech and social scientific discourse to keep pace with the reality that needs to be discussed.” There may be a better way, but “at present our concepts fail us.”

Winner follows with a brief discussion of the unthinking polarity into which discussions of technology consequently fall: “discussions of the political implications of advanced technology have a tendency to slide into a polarity of good versus evil.” We might add that this is not only a problem with discussions of the political implications of advanced technology; it is also a problem with discussions of the personal implications of advanced technology.

Winner adds that there “is no middle ground to discuss such things”; we encounter either “total affirmation” or “total denial.” In Winner’s experience, ambiguity and nuance are hard to come by, and any criticism, that is, anything short of total embrace, meets with predictable responses: “You’re just using technology as a whipping boy,” or “You just want to stop progress and send us back to the Middle Ages with peasants dancing on the green.”[2]

While it may not be as difficult to find more nuanced positions today, in part because of the sheer quantity of easily accessible commentary, it still seems generally true that most popular discussions of technology tend to fall into either the “love it” or “hate it” category.

In the end, it may be that Winner and Marx are not so far apart after all. While Winner is more tolerant of the use of technology and finds that, in fact, its use tells us something important about the not unreasonable anxieties of modern society, he also concludes that we need a better vocabulary with which to discuss all that gets lumped under the idea of technology.

I’m reminded of Alan Jacobs’ oft-repeated invocation of Bernard Williams’s adage, “We suffer from a poverty of concepts.” Indeed, indeed. It is this poverty of concepts that, in part, explains the ease with which discussions of technology become mired in volatile “love it or hate it” exchanges. A poverty of concepts short-circuits more reasonable discussion. Debate quickly morphs into acrimony because, in the absence of categories that might give reason a modest grip on the realities under consideration, the competing positions resolve into seemingly subjective expressions of personal preference and, thus, criticism becomes offensive.[3]

So where does this leave us? For my part, I’m not quite prepared to abandon the word technology. If nothing else it serves as a potentially useful Socratic point of entry: “So, what exactly do you mean by technology?” It does, to be sure, possess a hazardous tendency. But let’s be honest: what alternatives are left to us? Are we to name every leaf because speaking of leaves obscures the multiplicity and complexity of the phenomena?

That said, we cannot make do with technology alone. We should seek to remedy that poverty of our concepts. Much depends on it.

Of course, the same conditions that led to the emergence of the more recent expansive and sometimes hazardous use of the word technology are those that make it so difficult to arrive at a richer, more useful vocabulary. Those conditions include, but are not limited to, the ever-expanding ubiquity and complexity of our material apparatus and of the technological systems and networks in which we are enmeshed. The force of these conditions was first felt in the wake of the industrial revolution, and in the ensuing 200 years it has only intensified.

To the scale and complexity of steam-powered industrial machinery was added the scale and complexity of electrical systems, global networks of transportation, nuclear power, computers, digital devices, the Internet, global financial markets, etc. To borrow a concept from political science, technical innovation functions like a sort of ratchet effect. Scale and complexity are always torqued up, never released or diminished. And this makes it hard to understand this pervasive thing that we call technology.

For some time, through the early to mid-twentieth century, we outsourced this sort of understanding to the expert and managerial class. The post-war period witnessed a loss of confidence in the experts and managers; hence the heightened anxiety about technology that Winner registers in the ’70s. Nearly four decades later, we are still waiting for new and better forms of understanding.

______________________________________

[1] I’m borrowing the bulk of this paragraph from an earlier post.

[2] Here again I found in Winner an echo of conclusions I had reached myself.

[3] We suffer not only from a poverty of concepts, but also, I would add, from a poverty of narratives that might frame our concepts. See the end of this post.



Promethean Shocks

Consider the image below. It was created in 1952 by Alexander Leydenfrost for the 50th anniversary issue of Popular Mechanics.

[Image: Alexander Leydenfrost, “March of Science”]

I thought of this image as I read Thomas Misa’s brief discussion of the widespread perception that the pace of technological change is ever-quickening. “At least since Alvin Toffler’s best-selling Future Shock (1970),” Misa writes, “pundits perennially declare that the pace of technology is somehow quickening and that technology is forcing cultural changes in its wake, that our plunge into the future is driven by technology gone out of control.”

Misa, a historian of technology, is not altogether certain that the pace of technological change has in fact quickened. He is certainly opposed to the “crude technological determinism” inherent in the idea that technology is forcing cultural changes. He does, however, credit the experience that is often described in this language. He attributes the perception of quickening to “a split between our normative expectations for technology and what we observe in the world around us.” “It is not so much that our technologies are changing especially quickly,” he explains, “but that our sense of what is ‘normal,’ about technology and society, cannot keep pace.”

According to Misa, developments in cloning, biotechnology, surveillance technologies, and nanotechnology are outstripping regulatory laws and ethical norms. This seems true enough. We might even describe the condition of modernity (and/or post-modernity, or hyper-modernity, or reflexive modernity, or whatever) as one of perpetual Promethean shock. This way of putting it seems more apt than “future shock.” One way of telling the story of modernity, after all, is to order it as a series of startling realizations of what we suddenly acquired the power to do. As Auden wrote of the modern Mind,

“Though instruments at Its command
Make wish and counterwish come true,
It clearly cannot understand
What It can clearly do.”

Clearly certain technological innovations yield this dizzying experience of Promethean shock more than others. The advent of human flight, the atomic bomb, and putting a man on the moon are just some of the examples that were at once startling, awe-inspiring, and disconcerting. When these technologies arrived, in rather quick succession, they rattled and unsettled reigning cultural and moral frameworks, tacit as they may have been, for incorporating new technologies into existing society.

These normative cultural expectations for technology were set, Misa suggests, during the “longer-duration ‘eras'” of technology which he identifies in his history of the relationship between technology and culture. These eras included, for example, the age of the Renaissance, when the demands of Renaissance court culture set the agenda for technological change, and the age of imperialism, when the imperatives of empire dictated the dominant agenda. The former era, in Misa’s analysis, spanned nearly two centuries, and the latter the better part of one; but the 20th century is home to at least four different eras, which Misa labels the eras of systems, modernism, war, and global culture respectively.

Misa is on to something here, but he seems to push the question back rather than answer it directly. Why, then, are these eras, tentative and suggestive as he acknowledges them to be, getting progressively shorter? He is closer to the mark, I think, when he also suggests that the sense of quickening is linked to another “quickening” in the “self-awareness of societies,” which he further defines as “our capacities to recognize and comprehend change.”

His brief illustration of this development is initially compelling. Two centuries elapsed before the Renaissance was named. Within 50 years of the appearance of factories in Manchester, the phrase industrial revolution was in use. Just four years after the telephone arrived in Moscow, Chekhov published a short story, “On the Telephone,” about the travails of an early adopter trying to make a restaurant reservation over the phone (fodder for a Seinfeld plot, if you ask me). Most recently, William Gibson gave us the term cyberspace “almost before the fact of cyberspace.” This sequence suggests to Misa that Western society has become increasingly self-aware of technological change and its consequences.

I think this claim has merit. The modern Mind that, according to Auden, “cannot understand / What It can clearly do” is the same Mind he described as the “self-observed, observing Mind.” But, again, we might be tempted to ask why our “self-awareness” is accelerating in this way. Might it not be attributed, at least in part, to an increase in the rapidity of technological development? The two seem inseparable. I’m reminded of Walter Ong’s dictum about writing: “Writing heightens consciousness.” In a slightly different way, might we not argue that technological change heightens societal self-awareness?

Consider again that Leydenfrost image above. It captures an important aspect of how we’ve come to understand our world: we have aligned our reckoning of the passage of time with the development of technology. We have technologies that mark time, but in another sense new technologies themselves mark time by their appearance, iteration, and obsolescence.

Human beings have long sought markers to organize the experience of time, of course. For a day, sunrise and sunset served just fine. The seasons, too, helped order the experience of a year. For longer periods, however, cultural rather than natural markers were needed. Consider, for instance, the common practice in the ancient world of reckoning time by the reigns of monarchs. “In the x year of so-and-so’s reign” is a recurring temporal formula.

Without achieving that sort of verbal formality or precision, I’d suggest that the development of technology–the reign, if you will, of certain technologies and artifacts–now does similar work for modern societies. Records, 8-tracks, tapes, CDs, MP3s. Desktops, laptops, tablets. Landline, portable landline, cellphone, smartphone. Black and white TV, color TV, projection TV, flatscreen TV. Pre-Internet/post-Internet. Dial-up/broadband/wireless. I suspect you can supply similar artifactual chronologies that have structured our recollection of the past. We seem to have synchronized our perception and experience of time to the cycles of technological innovation.

The Leydenfrost image also reminds us that insofar as the notion of progress exists at all today, it is clearly bound up with the advance of technology. All other forms of progress that we might imagine or aspire to, whether moral, economic, or social, are subsumed under the notion of technological progress. For that reason, rumors or suggestions that technological innovation might be slowing down unnerve us. We need the next big thing to keep coming on schedule, however trivial that next big thing might be, to distract us from our economic, political, and personal woes.

This was illustrated nicely by the minor tech-media freakout occasioned recently by a piece by Christopher Mims at Quartz arguing that 2013 was a lost year for tech. It was received as a heretical claim, and it was promptly and roundly condemned. “I, too, constantly yearn for mind-blowing new tech,” Farhad Manjoo tells us in his reply to Mims before reassuring us: “I think we’re witnessing the dawn of a new paradigm in machine-human cooperation: Combining machine intelligence with biological intelligence will always trump one or the other. Machines make us better, and we make machines better. There’s still hope for us. Welcome to the bionic future.”

The world may be falling apart around us, but we can bear it so long as we can project our hopes on the amorphous promise of technological advance. The prospect of a host of Promethean shocks that we seem poised to receive (from drones, robotics, AI, bioengineering, geo-engineering, nanotechnology) makes us nervous and unsettles our moral frameworks; but their absence would worry us more, I suspect.

 

Flame Wars, Punctuation Cartoons, and Net-heads: The Internet in 1988

I remember having, in the late 1980s, an old PC with a monochrome monitor, amber on black, that was hooked up to the Internet through Prodigy. I was on the Internet, but I had no idea what the Internet was. All I knew was that I could now get near-realtime updates on the score of Mets games.

Back in November, the Washington Post’s tech page posted the text of an article from the newspaper covering the Internet in 1988, right about the time I was messing around on Prodigy: “Here’s how The Post covered the ‘grand social experiment’ of the Internet in 1988.” It’s a fascinating portal into what feels like another world, sort of. I usually think of the modern Internet experience first taking shape with the World Wide Web in the early and mid-90s. Reading this article, I was struck by how early many of the contours of the web as we know it, and the language we use to describe it, began to appear. To be sure, there are also striking discontinuities, but the Internet in 1988 — before AOL, WWW, the dot-com bubble, Web 2.0, social, mobile, and all that — exhibited characteristics that will be readily recognizable.

Consider the early example of “crowd-sourcing” or collective intelligence with which the author opens the article:

Kreisel was looking for an efficient way to paint patterns inside computer-drawn shapes. Paul Heckbert, a graduate student in California, did not know Kreisel, but he had a pretty good piece of code, or computer programming, to do the job. He dispatched it to Kreisel’s machine, and seconds later the New Yorker’s problem was solved.

Of course, the snake was already in the garden. The article is, in fact, occasioned by “a rogue program, known as a ‘virus,'” designed by a student at Cornell that “did not destroy any data but clogged computers and wasted millions of dollars’ worth of skilled labor and computer time.”

The virus, we’re told, “frightens many network visionaries, who dream of a ‘worldnet’ with ever more extensive connections and ever fewer barriers to the exchange of knowledge.” (Cyber-utopianism? Check. Of course, the roots of cyber-utopianism go back further still than 1988.) According to a Harvard astrophysicist, the Internet is a “community far more than a network of computers and cables.” “When your neighbors become paranoid of one another,” he added, “they no longer cooperate, they no longer share things with each other. It takes only a very, very few vandals to … destroy the trust that glues our community together.”

The scale clearly appeared massive at the time, but today it seems quaint: “Together the news groups produce about 4 million characters of new material a day, the equivalent of about five average books.” But don’t worry about trying to keep up with it all: “90 percent of it is complete and utter trash,” at least as far as that astrophysicist was concerned. Hard to imagine that he was far off the mark (or that the ratio has shifted too much in the ensuing years).

At the time, “thousands of men and women in 17 countries swap recipes and woodworking tips, debate politics, religion and antique cars, form friendships and even fall in love.” Honestly, that’s not a bad sample of the sorts of things we’re still doing on the Internet. In part, this is because it is not a bad sample of what human beings do generally, so it’s what we end up doing in whatever social spaces we end up creating with our tools.

And when human beings find themselves interacting in new contexts created by new communication technologies, there are bound to be what we might kindly call “issues” while norms and conventions adapt to account for the new techno-social configuration. So already in 1988, we read that the Internet “has evolved its own language, social norms and ‘netiquette.’” More than twenty years later, that project is still ongoing.

The author focused chiefly on the tone of discourse on Internet forums and, rightly I think, attributed its volatility to the characteristics of the medium:

Wouk’s riposte is a good example of why arguments sometimes intensify into bitter feuds known as flame wars, after the tendency of one character in Marvel Comics’ Fantastic Four to burst into flames (“Flame on!” he shouts) when he is angry. Is Wouk truly angry or just having a good time? Because the written word conveys no tone of voice, it isn’t always easy to tell.

“In a normal social setting,” said Chuq von Rospach of Sun Microsystems, “chances are the two of them would have a giggle over it and it would go away. On a network it tends to escalate. The feedback mechanisms that tell you to back off, or tell you that this person’s joking, aren’t there. The words are basically lifeless.”

Not only would I have failed to guess that flame wars was already in use in 1988, but I also had no idea of its etymology. Note, too, that the author doesn’t use the word emoticon when describing these devices: “True net-heads sometimes resort to punctuation cartoons to get around the absence of inflection. They may append a :-) if they are making a joke (turn your head to the left) or use :-( for an ersatz frown.”

“Net-heads” appears not to have caught on, but what the author calls “punctuation cartoons” certainly have. (What we now call emoticons, by the way, first appeared in the late 19th century.)

Finally, lest the Internet of 1988 appear all too familiar, here’s one glaring point of discontinuity on which to close: “The one unbending rule is that thou shalt not post commercial announcements. It isn’t written anywhere, but heaven help the user who tries to broadcast an advertisement.”

And for good measure, here’s another snippet of the not-too-distant past (1994) that nonetheless manages to feel as if it were ages ago. “Allison, can you explain what Internet is?”

http://www.youtube.com/watch?v=JUs7iG1mNjI

Your Click Will Have Come From the Heart

[Image: World Youth Day 2013, Rio de Janeiro]

For the first time since 1517, when Martin Luther kicked off the Protestant Reformation, indulgences are in the news. As the start of the Roman Catholic Church’s 28th World Youth Day in Rio de Janeiro drew near, it was widely reported that Pope Francis had decreed the granting of special indulgences for those who took part in the event. A decree of this sort would not ordinarily garner any media attention, but this particular decree contained an unprecedented measure. Indulgences were offered even to those who followed the event on social media.

Perhaps a little background is in order here. In Roman Catholic theology, Purgatory is where those who are ultimately on their way to Heaven go to satisfy all of the temporal punishment they’ve got coming and otherwise get spiritually prepared to enter Heaven. This is where indulgences come in.

The Church, as a conduit of God’s grace, may issue indulgences to reduce one’s time in Purgatory. Indulgences are premised on the notion that while most of us end up in the red when the moral account of our lives is taken, saints, martyrs, the Virgin, etc. ended up in the black. Their abundance of moral virtue constitutes a Treasury of Merit on which the Church can draw in order to help out those who are still paying up in Purgatory. Indulgences, then, are not, as they are sometimes portrayed in the press, a “get out of Hell free” card. They are more like a fast-pass through Purgatory.

That early modern religious, political, and cultural revolution we call the Protestant Reformation was kicked off when a German monk named Martin Luther posted 95 Theses for disputation on the door of a church in Wittenberg. Luther took aim at the practice of selling indulgences to fill the Church’s more literal treasury. He was particularly scandalized by the abuses of the notoriously unscrupulous Johann Tetzel. Think of Tetzel as a late medieval version of the worst stereotype of a contemporary televangelist. Luther set out to shut Tetzel and his ilk down, and the rest, as they say, is history.

While the practice of selling indulgences was banned by the Church shortly after Luther’s day, the Catholic Church still offers indulgences for particular works of piety. And that brings us back to Francis’ offer of indulgences to those who participate in World Youth Day celebrations. Here is the portion of the decree that has made a story out of these indulgences:

Those faithful who are legitimately prevented may obtain the Plenary Indulgence as long as, having fulfilled the usual conditions — spiritual, sacramental and of prayer — with the intention of filial submission to the Roman Pontiff, they participate in spirit in the sacred functions on the specific days, and as long as they follow these same rites and devotional practices via television and radio or, always with the proper devotion, through the new means of social communication; …

The “new means of social communication” have been widely reduced to Twitter in accounts of this story. Following the Pope’s tweets (@Pontifex) during the week is one way of virtually participating in the event. The faithful may also watch live streaming video through web-portals set up by the Vatican or keep up on Facebook and Pinterest.

Claudio Maria Celli, president of the pontifical council for social communications, was quick to clarify the intent of the decree. “Get it out of your heads straight away,” Celli explained to the media, “that this is in any way mechanical, that you just need to click on the internet in a few days’ time to get a plenary indulgence.”

As the text of the decree makes clear, the “usual conditions” apply. Believers must be properly motivated and they must see to the ordinary means of grace offered by the Church: confession, penance, and prayer. And there’s also that line, almost entirely neglected in media reports, about being “legitimately prevented” from attending. But Celli seemed particularly determined to prevent any misunderstanding on account of the inclusion of digital media:

You don’t get the indulgence the way you get a coffee from a vending machine. There’s no counter handing out certificates. To put it another way, it won’t be sufficient to attend the mass in Rio online, follow the Pope on your iPad or visit Pope2You.net. These are only tools that are available to believers. What really matters is that the Pope’s tweets from Brazil, or the photos of World Youth Day that will be posted on Pinterest, should bear authentic spiritual fruit in the hearts of each one of us.

Protestants are usually taken to be more technologically savvy than the Catholic Church. After all, while the Catholic Church was weighing the moral hazards of the printing press, Protestants took to it enthusiastically and used it to spread their message across Europe. It is also true that American evangelicals have been especially keen on appropriating new media to spread the good news. As Henry Jenkins, a scholar of new media, put it,

Evangelical Christians have been key innovators in their use of emerging media technologies, tapping every available channel in their effort to spread the Gospel around the world. I often tell students that the history of new media has been shaped again and again by four key innovative groups – evangelists, pornographers, advertisers, and politicians, each of whom is constantly looking for new ways to interface with their public.

But this way of telling the story, centered as it is on print and the communication of a message, may be obscuring the fuller account of the Catholic Church’s relationship to media technology. It may be, in fact, that the Catholic Church is well-positioned, given its particular forms of spirituality, to flourish in an age of digital media.

Making that case convincingly is a book-length project, but, allowing for some rough-and-ready generalizations, here’s the shorter version. It starts with recognizing that Francis’ decision to extend indulgences to those who are not physically present in Rio is not without precedent. This strikingly 21st century development has a medieval antecedent.

If you’ve ever visited a Roman Catholic Church, you’ve likely seen the Way of the Cross (sometimes called the Stations of the Cross): a series of images depicting scenes from the final hours of Jesus’ life. There are fourteen stations, including, for example, his condemnation before Pilate, the laying of the cross on his shoulders, and his three falls on the way to the site of the crucifixion, culminating with his body being laid in the tomb. Catholics may earn indulgences by prayerfully traversing the Way of the Cross.

What makes this a precedent to Francis’ decree is that the Way of the Cross represented by images – carved, painted, sculpted, engraved, etc. – in local churches was a way of making a virtual pilgrimage to these same places in Jerusalem. Since at least the fourth century, Christians had prized a visit to the Holy Land, but, as you can easily imagine, that journey was not cheap or comfortable, nor was it entirely safe. By the late medieval period, the Way of the Cross was appearing in churches across Europe as a concession to those unable to travel to Jerusalem.

While the precise history of the Way of the Cross is a bit murky, it is clear that the regularizing of the virtual pilgrimage transpired under the auspices of the Franciscan Order. This is not entirely surprising given the Franciscan Order’s traditional care for the poor, those least likely to make the physical pilgrimage. It also makes it rather fitting that it is Pope Francis – who took his papal title from St. Francis of Assisi, the founder of the Franciscan Order – who first legitimized social media as a vehicle of indulgences.

What’s interesting about this connection is that Catholic theology had grappled with the notion of virtual presence long before the advent of electronic or digital media. And this leads to one of those egregious generalizations: Protestant piety was wedded to words (printed words particularly), while Catholic piety was historically comfortable with a wider range of media (images especially).

This means that Protestants have understood media primarily as a means of communicating information, and Catholics, while obviously aware of media as a means of communicating information, have also (tacitly perhaps) understood media as a means of communicating presence. Along with the Way of the Cross, consider the prominence of images, icons, and statues of Jesus, Mary, and the saints in Catholic piety. These images, whatever shape they take, are reverenced inasmuch as they mediate the presence of the “prototypes” which they represent.

Without claiming that the printed book created Protestantism, we could argue that in a media environment dominated by print, Protestant forms of piety enjoy a heightened plausibility. Print privileges message over presence. But high-bandwidth digital media, combining audio and image, have become more than conduits of information; they increasingly channel presence and may thus be more hospitable to Catholic spirituality. Conversely, digital media may present intriguing challenges to traditional forms of Protestant spirituality.

It’s not surprising, then, that Pope Francis has seen fit to sanction digital media as legitimate tools of spirituality for Catholics. As Fr Paolo Padrini, a Catholic scholar of new media, put it, digital media allow for “Sharing, acting in unison, despite the obstacle of distance. But it will still be real participation and that is why you will obtain the indulgence. Above all because your click will have come from the heart.”

Thinking and Its Rhetorical Enemies

In one short post, Alan Jacobs targeted several Borg Complex symptoms. The post was triggered by his frustration with an xkcd comic which simply strung together a series of concerns about technological developments expressed in the late 19th and early 20th centuries. The implicit message was abundantly clear: “Wasn’t that silly and misguided? Of course it was. Now stop your complaining about technology today.”

Jacobs raised four salient points in response:

1. “Why do we just assume that their concerns were senseless?”

2. While we may endorse the trade-offs new technologies entail, “it would be ridiculous to say that no trade has been made.”

3. “Moreover, even if people were wrong to fear certain technologies in the past, that says absolutely nothing about whether people who fear certain other technologies today are right or wrong.”

4. This sort of thing presents “an easy excuse not to think about things that need to be thought about.”

Exactly right on all counts.

In partial response to Jacobs’ question, I’d suggest that when living memory of a lost state of affairs also perishes, so too does the existential force of the loss and its plausibility. What we know is that life went on – here we are after all – and that seems to be the only bright line of consequence. All that is established by this, of course, is that we eventually acclimated to the new state of affairs. That we eventually get used to a state of affairs tells us nothing about its quality or desirability, nor that of the state of affairs that was displaced. To assume that it does is a future-tense extension of the naturalistic fallacy: simply because something comes to be the case, it does not follow that it ought to be the case.

The second point above recalls Neil Postman’s discussion of (yes, you guessed it) Phaedrus, Plato’s famous dialog in which Socrates tells the story of Thamus and Theuth. The god Theuth presents Thamus, king of Egypt, with a number of inventions including writing. Theuth is understandably excited about his creations, but Thamus is less sanguine. He warns that writing, among other things, will destroy memory. Learning to cite this story and dismiss it scornfully must be the first thing they teach you in tech-punditry school. But, as Jacobs points out, Thamus was not wrong. Here is Postman’s take:

“[Thamus’ error] is not in his claim that writing will damage memory and create false wisdom. It is demonstrable that writing has had such an effect. Thamus’ error is in his believing that writing will be a burden to society and nothing but a burden. For all his wisdom, he fails to imagine what writing’s benefits might be, which, as we know, have been considerable.”

“Every technology,” Postman goes on to say, “is both a burden and a blessing; not either-or, but this-and-that.” Those who see only blessing Postman labels “zealous Theuths, one-eyed prophets who see only what new technologies can do and are incapable of imagining what they will undo.” Postman grants, of course, that there are also one-eyed prophets who speak only of the burdens of technology. It is best then to open both eyes.

Jacobs’ third point reminds us that the one-eyed prophets of technological blessing, those who dismiss the silly fears of previous generations, take Chicken Little as their normative story: the sky never, ever falls. As I’ve written before, the tale of the boy who cried wolf serves better. Even if earlier alarms proved false, it does not follow that the wolf never comes.

Finally, it is the fourth point that bears reiterating most emphatically. We need to think more, not less. It is that simple. There are many problems with Borg Complex rhetoric; that it undermines thinking and judgement may be the most disturbing and damaging.