A Reply to Adam Thierer

By most measures, this blog enjoys a modest readership. However, if I may be allowed the boast, I’d venture to claim that its readers are among the most thoughtful, irenic, and articulate commenters you’d be likely to find in the Hobbesian world of Internet comment boxes.

Adam Thierer of the Mercatus Center at George Mason University has long been one of these thoughtful interlocutors. If you take a look at the comments on my last post or jump over to his post, you’ll see Adam’s reply to my list of suggestions for tech writers. It’s worth reading, and what follows will make more sense if you take the time to do so.

__________________________________________________

Adam,

As always, thanks for your thoughtful and irenic response. Reading over your comment, it seemed to me that we are in broad agreement, at least as to the importance of asking the right questions.

You are, for instance, exactly right to note that most of these recommendations, including 2 and 10, “are born of a certain frustration with the tenor of much modern technology writing; the sort of Pollyanna-ish writing that too casually dismisses legitimate concerns about the technological disruptions and usually ends with the insulting phrase, ‘just get over it.’”

That’s a pretty good summary of the posture of critics and writers that I was targeting with this list. Happily, while this posture may characterize too much tech writing, it doesn’t describe the whole of it!

The best critics will, in my estimation, do precisely what you recommend; they will pursue “inquiry into the nature of individual and societal acclimation to technological change.” The research program that emerges from the series of questions you listed would be tremendously important and useful.

My knowledge of such things is certainly not exhaustive, but I’m having trouble thinking of a title that explicitly and comprehensively addresses the mechanisms of cultural adaptation to new technologies. Of course, a good deal of preliminary work has been done in the form of the many studies by historians of technology of particular technologies and how they were received. I’m thinking of a few classics such as Merritt Roe Smith’s Harpers Ferry Armory and the New Technology: The Challenge of Change, Joseph Corn’s The Winged Gospel: America’s Romance with Aviation, 1900-1950, and Claude Fischer’s America Calling: A Social History of the Telephone to 1940.

What is lacking, so far as I know, is a work that draws on such granular studies in order to develop a more comprehensive theory of technological change and assimilation.

Of course, as you write, it is one thing to describe the processes of adaptation or the mechanisms of assimilation (and resistance!), and it is another thing to then make normative judgments about the axial status (for lack of a better, sufficiently inclusive way of putting it) of those adaptations and assimilations.

Such metrics eventually involve us in sorting out the various sources of our political, economic, ethical, and even religious assumptions. This was part of my point when I wrote about what the tech critic loves a couple of years ago. Our criticisms and evaluations are animated by our fundamental commitments. Thus, as you suggested, it may be that arriving at a consensus about the standards of evaluation would prove elusive. Actually, I’m almost certain that it would prove elusive. But I’m in favor of foregrounding that lack of consensus and our tacit evaluative frameworks; failure to do so leads to unfruitful exchanges among people who do little more than talk past each other while failing to understand why others don’t see what is obvious to them.

All of that to say that I would not object to your proposed addendum to number 10: “But how people and institutions learned to cope with those concerns is worthy of serious investigation. And what we learned from living through that process may be valuable in its own right.”

I would put my only hesitation on that score this way: there are some forms of experiential knowledge that, while ultimately valuable, I would not voluntarily choose to gain, nor wish on others.

I think, for example, of the kind of self-knowledge that we might gain through the experience of tragedy. There may very well be personal knowledge, enhanced vision, greater strength, etc. to be had, but I would not choose tragedy for myself nor place others in a situation in which they were forced to learn such things whether they wanted to or not.

Technological innovation is important and it can be valuable and beneficial. (I say “can be” advisedly. I think it is a mistake to assume that innovation is in itself an unalloyed good.) Innovation entails risk, of course, and a life driven solely by the avoidance of risk is not a healthy life. Following Huxley, I’ve made that point a number of times on this blog. That said, there is a difference between the voluntary assumption of risk and the involuntary imposition of risk on others, particularly when the negative fallout would fall disproportionately on those upon whom the risk was imposed and who stood to benefit the least from the potentially positive outcomes. This is part of the ethical challenge as I see it. The nature of our technologies (connected, global, networked, etc.) is such that risk may be unjustly distributed. There may be no easy practical solution to this, but we should at least be prepared to speak frankly about the nature of the situation rather than glossing over such things with clichés and slogans. (I do not mean to suggest that you are guilty of this.)

This is, I realize, a tremendously thorny and complex field to navigate wisely. I suspect that in your latest work you addressed some of these very issues, and I’m hoping to read what you have to say about them as soon as my schedule lightens up. Also, I’ve not read Garreau’s work, but it looks as if that should also go on the ever-expanding, never-diminishing “to-read” list.

I’ll wrap up by commending your optimism. While the list that kicked off this exchange was focused on the characteristic errors of the tech-utopians, a similar list might’ve been put together to challenge the tech-dystopians. As I’ve admitted before, I like to think that I occupy the pragmatic middle that you identified in your schema of tech optimism and pessimism, but perhaps leaning toward the pessimistic side of the ledger. But pessimism is not the point, of course; much less is despair. I’m not sure that optimism is quite right either, though. Returning to the philosophical/moral/religious frameworks at play in our thinking about such things, I tend toward the language of hope. Such hope, however, does not preclude the possibility of much penultimate injustice and disorder that we should work to mitigate and set right.

Again, many thanks for the response!
