… then you know that I have three series of posts in progress right now. Two relate to the “Internet of Things,” automation, and technological enchantment; the third deals with the religious/cultural matrix of technological innovation. As it happens, though, each of these will be on hold for the next week or so during which I’ll be away and without Internet connection–on a digital sabbath of sorts, although more by circumstance than by design. In any case, the blog will be silent for the next several days.
What has Silicon Valley to do with Jerusalem?
More than you might think, but that question, of course, is a riff on Tertullian’s famous query, “What has Athens to do with Jerusalem?” It was a rhetorical question. By it, Tertullian implied that Christian theology, represented by Jerusalem, should steer clear of Greek philosophy, represented by Athens. I offer my question, in which Silicon Valley represents technological “innovation,” more straightforwardly and as a way of introducing this second post in conversation with Pascal-Emmanuel Gobry’s essay, “Peter Thiel and the Cathedral.”
In the first post, I raised some questions about terminology and the force of Gobry’s analogy: “The monastics were nothing if not innovators, and the [monastic] orders were the great startups of the day.” I was glad to get some feedback from Gobry, and you can read it here; you can also read my response below Gobry’s comment. Of course, Internet reading being what it is, it’s probably better if I just give you the gist of it. Gobry thought I made a bit too much of the definitional nuances while also making clear that he was well aware of the distinctions between a twenty-first-century start-up and a thirteenth-century monastery.
For the record, I never doubted Gobry’s awareness of the fine points at issue. But when the fine points are relevant to the conversation, I think it best to bring them to the surface. It matters, though, what point is being made, and this may be where my response to Gobry’s essay missed the mark, or where Gobry and I might be in danger of talking past one another. The essay reads a bit like a manifesto; it is a call to action. Indeed, it explicitly ends as such. Given that rhetorical context, my approach may not have been entirely fair. In fact, it may be better to frame most of what I plan to write as being “inspired” by Gobry’s post, rather than as a response to it.
It would depend, I think, on the function of the historical analogies, and I’ll let Gobry clarify that for me. As I mentioned in my reply to his comment, it matters what function the historical analogies–e.g., monasteries as start-ups–are intended to play. Are they merely inspirational illustrations, or are they intended as morally compelling arguments? My initial response assumed the latter, thus my concern to clarify terminology and surface the nuance before moving on to a more formal evaluation of the claim.
The closing paragraphs of Gobry’s response to my post, however, suggested to me that I’d misread the import of the analogies. Twice Gobry clarified his interest in the comparisons:
“What interests me in the analogy between a startup and a monastic foundation is the element of risk and folly in pursuit of a specific goal,”
“What interests me in the analogy between monastic orders and startups is the distinct sense of mission, a mission which is accomplished through the daring, proficiency and determination of a small band of people, and through concrete ends.”
That sounds a bit more like an inspirational historical illustration than it does an argument by analogy based on the assumed moral force of historical precedent. Of course, that’s not a criticism. (Although I’m not sure it’s such a great illustration, for the same reasons I didn’t think it made a convincing argument.) It just means that I needed to recalibrate my own approach and that it might be best to untether these considerations a bit from Gobry’s post. Before doing so, I would just add this. If the crux of the analogy is the element of risk and folly in pursuit of a goal and a sense of mission executed by a devoted community, then the monastic tradition is just one of many possible religious and non-religious illustrations.
Fundamentally, though, even while Gobry and I approach it from different angles, I still do think we are both interested in the same issue: the religious/cultural matrix of technological innovation.
In Gobry’s view, we need to recover the innovative spirit illustrated within the monastic tradition and also by the building of the great medieval cathedrals. In a subsequent post, I’ll argue that a closer look at both helps us to see how the relationship between technology and culture has evolved in such a way that the strength of cultural institutions that ought to drive “innovation” has been sapped. In this light, Gobry’s plea for the church to take up the mantle of innovation might be understood as a symptom of what has gone wrong with respect to technology’s relationship to religion, and culture more broadly. In short, the problem is that technological innovation is no longer a means directed by the church or some other cultural institution to some noble end; it is too frequently pursued as an end in itself. For the record, I don’t think this is what Gobry himself is advocating.
Gobry is right to raise questions about the relationship between technological innovation and, to borrow Lynn White’s phrasing, cultural climates. White himself argued that there was something about the cultural climate of medieval Europe that proved hospitable to technological innovation. But looking over the evolution of technology and culture over the subsequent centuries, it becomes apparent that the relationship between technology and culture has become disordered. In the next post, I’ll start with the medieval cathedrals to fill out that claim.
Last week I read a spirited essay by Pascal-Emmanuel Gobry titled “Peter Thiel and the Cathedral.” Gobry’s post was itself inspired by a discussion of technology, politics, and theology between Thiel, a co-founder of PayPal, and theologian N.T. Wright, formerly Bishop of Durham. That discussion was moderated by NY Times columnist Ross Douthat. As for Gobry, he is a French entrepreneur and writer currently working for Forbes. Additionally, Gobry and Douthat are both Roman Catholics. Wright is a minister in the Church of England. Thiel’s religious views are less clear; he identifies as a Christian with “somewhat heterodox” beliefs.
So, needless to say, I found this mix of themes and personalities more than a little interesting. In fact, I’ve been thinking of Gobry’s post for several days. The issues it raised, in their broadest form, include the relationship between technology and culture as well as the relationship between Christianity and technology. Of course, these issues can hardly be addressed adequately in a blog post, or even a series of blog posts. While I thought about Gobry’s post and read related materials, relevant considerations cascaded. Nothing short of a book-length treatment could do this subject justice. That said, beginning with this post, I’m going to offer a few considerations, briefly noted, that I think are worth further discussion.
In this post, I’ll start with a quick sketch of Gobry’s argument, and I’ll follow that with some questions about the key terms at play in this discussion. My goal is to read Gobry charitably and critically precisely because I share his sense that these are consequential matters, and not only for Christians.
Reduced to its essence, Gobry’s essay is a call for the Church to reclaim its role as a driving force of technological innovation for the good of civilization. The logic of his argument rests on the implications of the word reclaim. In his view, the Church, especially the medieval church, was a key player in the emergence of Western science and technology. Somewhere along the way, the Church lost its way and now finds itself an outsider to the technological project, more often than not a wary and critical outsider. Following Thiel, Gobry is worried that the absence of a utopian vision animating technological innovation will result in technological stagnation with dire civilizational consequences.
With that sketch in place, and I trust it is a fair summary, let’s move on to some of the particulars, and we’ll need to start by clarifying terminology.
Church, Technology, Innovation—we could easily spend a lot of time specifying the sense of each of these key terms. Part of my unease with Gobry’s argument arises from the equivocal nature of these terms and how Gobry deploys them to analogize from the present to the past. I would assume that Gobry, as a Roman Catholic, primarily has the Roman Church in view when he talks about “the Church” or even Christianity. On one level this is fine; it’s the tradition out of which Gobry speaks, and, moreover, his blog is addressed primarily to a Catholic audience. My concern is that the generalization obscures non-trivial nuances. So, for instance, even the seemingly cohesive and monolithic world of medieval Catholicism was hardly so uniform on closer examination. Consequently, it would be hard to speak about a consistent and uniform attitude or posture toward “technology” that characterized “the Church” even in the thirteenth century. Things get even thornier when we realize that technology as it exists today was, like so much of modernity, funneled through the intellectual, economic, political, and religious revolution that was the Reformation.
But that is not all. As I’ve discussed numerous times before, defining “technology” is itself also a remarkably challenging task; the term ends up being a fiendishly expansive concept with fuzzy boundaries all around. This difficulty is compounded by the fact that in the medieval era there was no word that did the same semantic work as our word “technology.” It is not until the ninth century that the Carolingian theologian, John Scotus Erigena, first employed the term artes mechanicae, or the “mechanical arts,” which would function as the nearest equivalent for some time.
Finally, “innovation” is also, in my view, a problematic term. At the very least, I do not think we can use it univocally in both medieval and contemporary contexts. In our public discourse, innovation implies not only development in the “nuts and bolts” of technical apparatus; it also implies the conditions of the market economy and the culture of Silicon Valley. Whatever one makes of those two realities, it seems clear they render it difficult, if not impossible, to make historical generalizations about “innovation.”
So, my first major concern is that speaking about the Church, technology, and innovation involves us in highly problematic generalizations. Generalizations are necessary, I understand this, especially within the constraints of short-form writing. I’m not pedantically opposed to generalizations in principle. However, every generalization, every concept, obscures particularities and nuances. Consequently, there is a tipping point at which a generalization not only simplifies, but also falsifies. My sense is that in Gobry’s post, we are very close to generalizations that falsify in such a way that they undermine the thrust of the argument. This is especially important because the historical analogies in this case are meant to carry a normative, or at least persuasive, force.
Because the generalizations are problematic, the analogies are too. Consider the following lines from Gobry: “The monastics were nothing if not innovators, and the [monastic] orders were the great startups of the day. The technological and other accomplishments of the great monastic orders are simply staggering.”
As a matter of fact, the second sentence is absolutely correct. The analogies in the first sentence, however, are, in my view, misleading. The first clause is misleading because it suggests, as I read it, that “innovation” was of the essence of the monastic life. As Gobry knows, “monastic life” is already a generalization that obscures great variety on the question at issue, especially when eastern forms of monastic life are taken into consideration. But even if we concentrate on the more relevant strand of western and Benedictine monasticism, we run into trouble.
As George Ovitt found in his excellent work, The Restoration of Perfection: Labor and Technology in Medieval Culture, technical considerations were consistently subordinated to spiritual ends. The monastics were, in fact, much else even if they were at times innovators. This is evident in the Benedictines’ willingness to lay aside labor when it became possible to commission a lesser order of lay brothers or even paid laborers to perform the work necessitated by the community.
The second clause—“the [monastic] orders were the great start-ups of the day”—is misleading because it imports the economic conditions and motivations of the early twenty-first century to the medieval monasteries. Whatever we might say about the monasteries and their conflicted relationship to wealth—most monastic reform movements centered on this question—it seems unhelpful, if not irresponsible, to characterize them as “start-ups.” The accumulation of wealth was incidental to the life of the monastery, and, historically, threatened its core mission. By contrast, the accumulation of wealth is a start-up’s raison d’être and shapes its life and work.
I hope these considerations do not come across as merely “academic” quibbles. I’ve no interest in being pedantic. In writing about technology and Christianity, Gobry has addressed a set of issues that I too consider important and consequential. Getting the relevant history right will help us better understand our present moment. In follow-up posts, I’ll take up some of the more substantive issues raised by Gobry’s essay, and I’ll follow his lead by using the construction of the cathedrals as a useful case study.
As I follow the train of thought that took the dream of a smart home as a point of departure, I’ve come to a fork in the tracks. Down one path, I’ll continue thinking about the distinctions among Mechanization, Automation, and Animation. Down the other, I’ll pursue the technological enchantment thesis that arose incidentally in my mind as a way of either explaining or imaginatively characterizing the evolution of technology along those three stages.
Separating these two tracks is a pragmatic move. It’s easier for me at this juncture to consider them separately, particularly to weigh the merits of the latter. It may be that the two tracks will later converge, or it may be that one or both are dead ends. We’ll see. Right now I’ll get back to the three stages.
In his comment on my last post, Evan Selinger noted that my schema was Borgmannesque in its approach, and indeed it was. If you’ve been reading along for a while, you know that I think highly of Albert Borgmann’s work. I’ve drawn on it a time or two of late. Borgmann looked for a pattern that might characterize the development of technology, and he came up with what he called the device paradigm. Succinctly put, the device paradigm described the tendency of machines to become simultaneously more commodious and more opaque, or, to put it another way, easier to use and harder to understand.
In my last post, I used heating as an example to walk through the distinctions among mechanization, automation, and animation. Borgmann also uses heating to illustrate the device paradigm: lighting and sustaining a fire is one thing, flipping a switch to turn on the furnace is another. Food and music also serve as recurring illustrations for Borgmann. Preparing a meal from scratch is one thing, popping a TV dinner in the microwave is another. Playing the piano is one thing, listening to an iPod is another. In each case a device made access to the end product–heat, food, music–easier, instantaneous, safer, more efficient. In each case, though, the workings of the device beneath the commodious surface became more complex and opaque. (Note that in the case of food preparation, both the microwave and the TV dinner are devices.) Ease of use also came at the expense of physical engagement, which, in Borgmann’s view, results in an impoverishment of experience and a rearrangement of the social world.
Keep that dynamic in mind as we move forward. The device paradigm does a good job, I think, of helping us think about the transition to mechanization and from mechanization to automation and animation, chiefly by asking us to consider what we’re sacrificing in exchange for the commodiousness offered to us.
Ultimately, we want to avoid the impulse to automate for automation’s sake. As Nick Carr, whose forthcoming book, The Glass Cage: Automation and Us, will be an excellent guide in these matters, recently put it, “What should be automated is not what can be automated but what should be automated.”
That principle came at the end of a short post reflecting on comments made by Google’s “Android guru,” Sundar Pichai. Pichai offered a glimpse at how Google envisions the future when he described how useful it would be if your car could sense that your child was now inside and automatically changed the music playlists accordingly. Here’s part of Carr’s response:
“With this offhand example, Pichai gives voice to Silicon Valley’s reigning assumption, which can be boiled down to this: Anything that can be automated should be automated. If it’s possible to program a computer to do something a person can do, then the computer should do it. That way, the person will be ‘freed up’ to do something ‘more valuable.’ Completely absent from this view is any sense of what it actually means to be a human being. Pichai doesn’t seem able to comprehend that the essence, and the joy, of parenting may actually lie in all the small, trivial gestures that parents make on behalf of or in concert with their kids — like picking out a song to play in the car. Intimacy is redefined as inefficiency.”
But how do we come to know what should be automated? I’m not sure there’s a short answer to that question, but it’s safe to say that we’re going to need to think carefully about what we do and why we do it. Again, this is why I think Hannah Arendt was ahead of her time when she undertook the intellectual project that resulted in The Human Condition and the unfinished The Life of the Mind. In the first she set out to understand our doing and in the second, our thinking. And all of this in light of the challenges presented by emerging technological systems.
One of the upshots of new technologies, if we accept the challenge, is that they lead us to look again at what we might have otherwise taken for granted or failed to notice altogether. New communication technologies encourage us to think again about the nature of human communication. New medical technologies encourage us to think again about the nature of health. New transportation technologies encourage us to think again about the nature of place. And so on.
I had originally used the word “forced” where I settled for the word “encourage” above. I changed the wording because, in fact, new technologies don’t force us to think again about the realms of life they impact. It is quite easy, too easy perhaps, not to think at all, simply to embrace and adopt the new technology without considering its consequences. Or, what amounts to the same thing, it is just as easy to reject new technologies out of hand because they are new. In neither case would we be thinking at all. If we accept the challenge to think again about the world as new technologies cast aspects of it in a new light, we might even begin to see this development as a great gift, one that leads us to value, appreciate, and even love what was before unnoticed.
Returning to the animation schema, we might make a start at thinking by simply asking ourselves what exactly is displaced at each transition. When it comes to mechanization, it seems fairly straightforward. Mechanization, as I’m defining it, ordinarily displaces physical labor.
Capturing what exactly is displaced when it comes to automation is a bit more challenging. In part, this is because the distinctions I’m making between mechanization and automation on the one hand and automation and animation on the other are admittedly fuzzy. In fact, all three are often simply grouped together under the category of automation. This is a simpler move, but I’m concerned that we might not get a good grasp of the complex ways in which technologies interact with human action if we don’t parse things a bit more finely.
So let’s start by suggesting that automation, the stage at which machines operate without the need for constant human input and direction, displaces attention. When something is automated, I can pay much less attention to it, or perhaps, no attention at all. We might also say that automation displaces will or volition. When a process is automated, I don’t have to will its action.
Finally, animation–the stage at which machines not only act without direct human intervention, but also “learn” and begin to “make decisions” for themselves–displaces agency and judgment.
By noting what is displaced we can then ask whether the displaced element was an essential or inessential aspect of the good or end sought by the means, and so we might begin to arrive at some more humane conclusions about what ought to be automated.
I’ll leave things there for now, but more will be forthcoming. Right now I’ll leave you with a couple of questions I’ll be thinking about.
Also, coming back to Arendt, she laid out two sets of three categories that overlap in interesting ways with the three stages as I’m thinking of them. In her discussion of human doing, she identifies labor, work, and action. In her discussion of human thinking, she identifies thought, will, and judgment. How can her theorizing of these categories help us understand what’s at stake in drive to automate and animate?
More than likely, you’ve recently heard about Facebook’s experiment in the manipulation of user emotions. I know, Facebook as a whole is an experiment in the manipulation of user emotions. Fair enough, but this was a more pointed experiment that involved the manipulation of what users see in their News Feeds. Here is how the article in the Proceedings of the National Academy of Sciences summarized the significance of the findings:
“We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”
Needless to say (well except to Facebook and the researchers involved), this massive psychological experiment raises all sorts of ethical questions and concerns. Here are some of the more helpful pieces I’ve found on the experiment and its implications:
- “Everything We Know About Facebook’s Secret Mood Manipulation Experiment” by Robinson Meyer
- “Social Media as Political Control: The Facebook Study, Acxiom, & NSA” by David Golumbia
- “The Empire Strikes Back” by Alan Jacobs (read the exchange in the comments too)
- “Facebook Does Mind Control” by Michael Gurstein
Update: Here are two more worth considering:
- “Data Science: What the Facebook Controversy is Really About” by Sara Watson
- “What Does the Facebook Experiment Teach Us?” by Danah Boyd
These six pieces should give you a good sense of the variety of issues involved in the whole situation, along with a host of links to other relevant material.
I don’t have too much to add except two quick observations. First, I was reminded, especially by Gurstein’s post, of Greg Ulmer’s characterization of the Internet as a “prosthesis of the unconscious.” Ulmer means something like this: The Internet has become a repository of the countless ways that culture has imprinted itself upon us and shaped our identity. Prior to the advent of the Internet, most of those memories and experiences would be lost to us even while they may have continued to be a part of who we became. The Internet, however, allows us to access many, if not all, of these cultural traces, bringing them to our conscious awareness and allowing us to think about them.
What Facebook’s experiment suggests rather strikingly is that such a prosthesis is, as we should have known, a two-way interface. It not only facilitates our extension into the world; it is also a means by which the world can take hold of us. As a prosthesis of our unconscious, the Internet is not only an extension of our unconscious, it also permits the manipulation of the unconscious by external forces.
Secondly, I was reminded of Paul Virilio’s idea of the general or integral accident. Virilio has written extensively about technology, speed, and accidents. The accident is an expansive concept in his work. In his view, accidents are built into the nature of any new technology. As he has frequently put it, “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash … Every technology carries its own negativity, which is invented at the same time as technical progress.”
The general or integral accident is made possible by complex technological systems. The accident made possible by a nuclear reactor or air traffic is obviously of a greater scale than that made possible by the invention of the hammer. Complex technological systems create the possibility of cascading accidents of immense scale and destructiveness. Information technologies also introduce the possibility of integral accidents. Virilio’s most common examples of these information accidents include the flash crashes on stock exchanges induced by electronic trading.
All of this is to say that Facebook’s experiment gives us a glimpse of what shape the social media accident might take. An interviewer alludes to this line from Virilio: “the synchronizing of collective emotions that leads to the administration of fear.” I’ve not been able to track down the original context, but it struck me as suggestive in light of this experiment.
Oh, and lastly, Facebook COO Sheryl Sandberg issued an apology of sorts. I’ll let Nick Carr tell you about it.