Democracy and Technology

Alexis Madrigal has written a long and thoughtful piece on Facebook’s role in the last election. He calls the emergence of social media, Facebook especially, “the most significant shift in the technology of politics since the television.” Madrigal is pointed in his estimation of the situation as it now stands.

Early on, describing the widespread (but not total) failure to understand the effect Facebook could have on an election, Madrigal writes, “The informational underpinnings of democracy have eroded, and no one has explained precisely how.”

Near the end of the piece, he concludes, “The point is that the very roots of the electoral system—the news people see, the events they think happened, the information they digest—had been destabilized.”

Madrigal’s piece brought to mind, not surprisingly, two important observations by Neil Postman that I’ve cited before.

My argument is limited to saying that a major new medium changes the structure of discourse; it does so by encouraging certain uses of the intellect, by favoring certain definitions of intelligence and wisdom, and by demanding a certain kind of content–in a phrase, by creating new forms of truth-telling.

Also:

Surrounding every technology are institutions whose organization–not to mention their reason for being–reflects the world-view promoted by the technology. Therefore, when an old technology is assaulted by a new one, institutions are threatened. When institutions are threatened, a culture finds itself in crisis.

In these two passages, I find the crux of Postman’s enduring insights, the insights, more generally, of the media ecology school of tech criticism. It seems to me that this is more or less where we are: a culture in crisis, as Madrigal’s comments suggest. Read what he has to say.

On Twitter, replying to a tweet from Christopher Mims endorsing Madrigal’s work, Zeynep Tufekci took issue with Madrigal’s framing. Madrigal, in fact, cited Tufekci as one of the few people who understood a good deal of what was happening and, indeed, saw it coming years ago. But Tufekci nonetheless challenged Madrigal’s point of departure, which is that the entirety of Facebook’s role caught nearly everyone by surprise and couldn’t have been foreseen.

Tufekci has done excellent work exploring the political consequences of Big Data, algorithms, etc. This 2014 article, for example, is superb. But in reading Tufekci’s complaint that her work and the work of many other academics was basically ignored, my first thought was that the similarly prescient work of technology critics has been more or less ignored for much longer. I’m thinking of Mumford, Jaspers, Ellul, Jonas, Grant, Winner, Mander, Postman and a host of others. They have been dismissed as too pessimistic, too gloomy, too conservative, too radical, too broad in their criticism and too narrow, as Luddites and reactionaries, etc. Yet here we are.

In a 1992 article about democracy and technology, Ellul wrote, “In my view, our Western political institutions are no longer in any sense democratic. We see the concept of democracy called into question by the manipulation of the media, the falsification of political discourse, and the establishment of a political class that, in all countries where it is found, simply negates democracy.”

Writing in the same special issue of the journal Philosophy and Technology edited by Langdon Winner, Albert Borgmann wrote, “Modern technology is the acknowledged ruler of the advanced industrial democracies. Its rule is not absolute. It rests on the complicity of its subjects, the citizens of the democracies. Emancipation from this complicity requires first of all an explicit and shared consideration of the rule of technology.”

It is precisely such an “explicit and shared consideration of the rule of technology” that we have failed to seriously undertake. Again, Tufekci and her colleagues are hardly the first to have their warnings, measured, cogent, and urgent as they may be, ignored.

Roger Berkowitz of the Hannah Arendt Center for Politics and the Humanities recently drew attention to a commencement speech given by John F. Kennedy at Yale in 1962. Kennedy noted the many questions that America had faced throughout her history, from slavery to the New Deal. These were questions “on which the Nation was sharply and emotionally divided.” But now, Kennedy believed, we were ready to move on:

Today these old sweeping issues very largely have disappeared. The central domestic issues of our time are more subtle and less simple. They relate not to basic clashes of philosophy or ideology but to ways and means of reaching common goals — to research for sophisticated solutions to complex and obstinate issues.

These issues were “administrative and executive” in nature. They were issues “for which technical answers, not political answers, must be provided,” Kennedy concluded. You should read the rest of Berkowitz’s reflections on the prejudices exposed by our current crisis, but I want to take Kennedy’s technocratic faith as a point of departure for some observations.

Kennedy’s faith in the technocratic management of society was just the latest iteration of modernity’s political project, the quest for a neutral and rational mode of politics for a pluralistic society.

I will put it this way: liberal democracy is a “machine” for the adjudication of political differences and conflicts, independently of any faith, creed, or otherwise substantive account of the human good.

It was machine-like in its promised objectivity and efficiency. But, of course, it would work only to the degree that it generated the subjects it required for its own operation. (A characteristic it shares with all machines.) Human beings have been, on this score, rather recalcitrant, much to the chagrin of the administrators of the machine.

Kennedy’s own hopes were just a renewed version of this vision, only they had become more explicitly a-political and technocratic in nature. It was not enough that citizens check certain aspects of their person at the door to the public sphere; now, it would seem, citizens would do well to entrust the political order to experts, engineers, and technicians.

Leo Marx recounts an important part of this story, unfolding throughout the 19th and early 20th centuries, in an article accounting for what he calls “postmodern pessimism” about technology. Marx outlines how “the simple [small r] republican formula for generating progress by directing improved technical means to societal ends was imperceptibly transformed into a quite different technocratic commitment to improving ‘technology’ as the basis and the measure of — as all but constituting — the progress of society.” I would also include the emergence of bureaucratic and scientific management in the telling of this story.

Presently we are witnessing a further elaboration of this same project along the same trajectory. It is the rise of governance by algorithm, a further, apparent distancing of the human from the political. I say apparent because, of course, the human is never fully out of the picture; we just create more elaborate technical illusions to mask the irreducibly human element. We buy into these illusions, in part, because of the initial trajectory set for the liberal democratic order, that of machine-like objectivity, rationality, and efficiency. It is on this ideal that Western society staked its hopes for peace and prosperity. At every turn, when the human element, in its complexity and messiness, broke through the facade, we doubled down on the ideal rather than question the premises. Initially, at least, the idea was that the “machine” would facilitate the deliberation of citizens by establishing rules and procedures to govern their engagement. When it became apparent that this would no longer work, we explicitly turned to technique as the common frame by which we would proceed. Now that technique has failed, because again the human manifested itself, we overtly turn to machines.

This new digital technocracy takes two seemingly paradoxical paths. One of these paths is the increasing reliance on Big Data and computing power in the actual work of governing. The other, however, is the deployment of these same tools for the manipulation of the governed. It is darkly ironic that this latter deployment of digital technology is intended to agitate the very passions liberal democracy was initially advanced to suppress (at least according to the story liberal democracy tells about itself). It is as if, having given up on the possibility of reasonable political discourse and deliberation within a pluralistic society, those with the means to control the new apparatus of government have simply decided to manipulate those recalcitrant elements of human nature to their own ends.

It is this latter path that Madrigal and Tufekci have done their best to elucidate. However, my rambling contention here is that the full significance of our moment is only intelligible within a much broader account of the relationship between technology and democracy. It is also my contention that we will remain blind to the true nature of our situation so long as we are unwilling to submit our technology to the kind of searching critique Borgmann advocated and Ellul thought hardly possible. But we are likely too invested in the promise of technology and too deeply compromised in our habits and thinking to undertake such a critique.

Friday Links: Questioning Technology Edition

My previous post, which raised 41 questions about the ethics of technology, is turning out to be one of the most viewed on this site. That is, admittedly, faint praise, but I’m glad that it is, because helping us to think about technology is why I write this blog. The post has also prompted a few valuable recommendations from readers, and I wanted to pass these along to you in case you missed them in the comments.

Matt Thomas reminded me of two earlier lists of questions we should be asking about our technologies. The first of these is Jacques Ellul’s list of 76 Reasonable Questions to Ask of Any Technology (update: see Doug Hill’s comment below about the authorship of this list.) The second is Neil Postman’s more concise list of Six Questions to Ask of New Technologies. Both are worth perusing.

Also, Chad Kohalyk passed along a link to Shannon Vallor’s module, An Introduction to Software Engineering Ethics.

Greg Lloyd provided some helpful links to the (frequently misunderstood) Amish approach to technology, including one to this IEEE article by Jameson Wetmore: “Amish Technology: Reinforcing Values and Building Communities” (PDF). In it, we read, “When deciding whether or not to allow a certain practice or technology, the Amish first ask whether it is compatible with their values.” What a radical idea; the rest of us should try it sometime! While we’re on the topic, I wrote about the Tech-Savvy Amish a couple of years ago.

I can’t remember who linked to it, but I also came across an excellent 1994 article in Ars Electronica that is composed entirely of questions about what we would today call a Smart Home: “How smart does your bed have to be, before you are afraid to go to sleep at night?”

And while we’re talking about lists, here’s a post on Kranzberg’s Six Laws of Technology and a list of 11 things I try to do, often with only marginal success, to achieve a healthy relationship with the Internet.

Enjoy these, and thanks again to those of you who provided the links.

Technology Will Not Save Us

A day after writing about technology, culture, and innovation, I’ve come across two related pieces.

At Walter Mead’s blog, the novel use of optics to create a cloaking effect provided a springboard into a brief discussion of technological innovation. Here’s the gist of it:

“Today, Big Science is moving ahead faster than ever, and the opportunities for creative tinkerers and home inventors are greater than ever. But the technology we’ve got today is more dynamic than what people had in the 19th and early 20th centuries. IT makes it possible to invent new services and not just new gadgets, though smarter gadgets are also part of the picture.

Unleashing the creativity of a new generation of inventors may be the single most important educational and policy task before us today.”

And …

“Technology in today’s world has run way ahead of our ability to exploit its riches to enhance our daily lives. That’s OK, and there’s nothing wrong with more technological progress. But in the meantime, we need to think much harder about how we can cultivate and reward the kind of innovative engineering that can harness the vast potential of the tech riches around us to lift our society and ultimately the world to the next stage of human social development.”

Then in this weekend’s WSJ, Walter Isaacson has a feature essay titled, “Where Innovation Comes From.” The essay is in part a consideration of the life of Alan Turing and his approach to AI. Isaacson’s point, briefly stated, is that, in the future, innovation will not come from so-called intelligent machines. Rather, in Isaacson’s view, innovation will come from the coupling of human intelligence and machine intelligence, each of them possessed of unique powers. Here is a representative paragraph:

“Perhaps the latest round of reports about neural-network breakthroughs does in fact mean that, in 20 years, there will be machines that think like humans. But there is another possibility, the one that Ada Lovelace envisioned: that the combined talents of humans and computers, when working together in partnership and symbiosis, will indefinitely be more creative than any computer working alone.”

I offer these two to you for your consideration. As I read them, I thought again about what I had posted yesterday. Since the post was more or less stream of consciousness, thinking by writing as it were, I realized that an important qualification remained implicit. I am not qualified to speak about technological innovation from the perspective of the technologist or the entrepreneur. Quite frankly, I’m not sure I’m qualified to speak about technological innovation from any vantage point. Perhaps it is simply better to say that my interests in technological innovation are historical, sociological, and ethical.

For what it is worth, then, what I was after in my previous post was something like the cultural sources of technological innovation. If technological innovation does not unfold in a value-neutral vacuum, what cultural forces shape it? Many, of course, but perhaps we might first say that while technological innovation is certainly driven by cultural forces, these cultural forces are not the only relevant factor. Those older philosophers of technology who focused on what we might, following Aristotle, call the formal and material causes of technological development were not altogether misguided. The material nature of technology imposes certain limits upon the shape of innovation. From this angle, perhaps it is the case that if innovation has stalled, as Peter Thiel, among others, worries, it is because all of the low-hanging fruit has been plucked.

When we consider the efficient and final causes of technological innovation, however, we enter the complex and messy realm of human desires and cultural dynamics. It is in this realm that the meaning of technology and the direction of its unfolding are shaped. (As an aside, we might usefully frame the perennial debate between the technological determinists and the social constructivists as a failure to hold together and integrate Aristotle’s four causes into our understanding of technology.) It is this cultural matrix of technological innovation that most interests me, and it was at this murky target that my previous post was aimed.

Picking up on the parenthetical comment above, one other way of framing the problem of technological determinism is by understanding it as a type of self-fulfilling prophecy. Or, perhaps it is better to put it this way: What we call technological determinism, the view that technology drives history, is not itself a necessary characteristic of technology. Rather, technological determinism is the product of cultural capitulation. It is a symptom of social fragmentation.

Allow me to borrow from what I’ve written in another context to expand on this point via a discussion of the work of Jacques Ellul.

Ellul defined technique (la technique) as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” This is an expansive definition that threatens, as Langdon Winner puts it, to make everything technology and technology everything. But Winner is willing to defend Ellul’s usage against its critics. In Winner’s view, Ellul’s expansive definition of technology rightly points to a “vast, diverse, ubiquitous totality that stands at the center of modern culture.”

Although Winner acknowledges the weaknesses of Ellul’s sprawling work, he is, on the whole, sympathetic to Ellul’s critique of technological society. Ellul believed that technology was autonomous in the sense that it dictated its own rules and was resistant to critique. “Technique has become autonomous,” Ellul concluded, “it has fashioned an omnivorous world which obeys its own laws and which has renounced all tradition.”

Additionally, Ellul claimed that technique “tolerates no judgment from without and accepts no limitation.” Moreover, “The power and autonomy of technique are so well secured that it, in its turn, has become the judge of what is moral, the creator of a new morality.” Ellul’s critics have noted that in statements such as these, he has effectively personified technology/technique. Winner thinks that this is exactly the case, but in his view this is not an unintended flaw in Ellul’s argument, it is his argument: “Technique is entirely anthropomorphic because human beings have become thoroughly technomorphic. Man has invested his life in a mass of methods, techniques, machines, rational-productive organizations, and networks. They are his vitality. He is theirs.”

And here is the relevant point for the purposes of this post: Ellul claims that he is not a technological determinist.

By this he means that technology did not always hold society hostage, and society’s relationship to technology did not have to play out the way that it did. He is merely diagnosing what is now the case. He points to ancient Greece and medieval Europe as two societies that kept technology in its place, as it were, as means circumscribed and directed by independent ends. Now, as he sees it, the situation is reversed. Technology dictates the ends for which it alone can be the means. Among the factors contributing to this new state of affairs, Ellul points to the rise of individualism in Western societies. The collapse of mediating institutions fractured society, leaving individuals exposed and isolated. Under these conditions, society was “perfectly malleable and remarkably flexible from both the intellectual and material points of view,” consequently “the technical phenomenon had its most favorable environment since the beginning of history.”

This last consideration is often forgotten by critics of Ellul’s work. In any case, it is, in my view, a point that is tremendously relevant to our contemporary discussions of technological innovation. As I put it yesterday, our focus on technological innovation as the key to the future is a symptom of a society in thrall to technique. Our creative and imaginative powers are thus constrained and caught in a loop of diminishing returns.

I hasten to add that this is surely not the whole picture, but it is, I think, an important aspect of it.

One final point related to my comments about our Enlightenment heritage. It is part of that heritage that we transformed technology into an idol of the god we named Progress. It was a tangible manifestation of a concept we deified, took on faith, and in which we invested our hope. If there is a palpable anxiety and reactionary defensiveness in our discussions about the possible stalling of technological innovation, it is because, like the prophets of Baal, we grow ever more frantic and feverish as it becomes apparent that the god we worshipped was false and our hopes are crushed. And it is no small thing to have your hopes crushed. But idols always break the hearts of their worshippers, as C.S. Lewis has put it.

Technology will not save us. Paradoxically, the sooner we realize that, the sooner we might actually begin to put it to good use.

What Are We Talking About When We Talk About Technology?

I’ve been of two minds with regards to the usefulness of the word technology. One of those two minds has been more or less persuaded that the term is of limited value and, worse still, that it is positively detrimental to our understanding of the reality it ostensibly labels. The most thorough case for this position is laid out in a 2010 article by the historian of technology Leo Marx, “Technology: The Emergence of a Hazardous Concept.”

Marx worried that the term technology was “peculiarly susceptible to reification.” The problem with a reified phenomenon is that it acquires “a ‘phantom-objectivity,’ an autonomy that seems so strictly rational and all-embracing as to conceal every trace of its fundamental nature: the relation between people.” This false aura of autonomy leads in turn to “hackneyed vignettes of technologically activated social change—pithy accounts of ‘the direction technology is taking us’ or ‘changing our lives.’” According to Marx, such accounts are not only misleading, they are also irresponsible. By investing “technology” with causal power, they distract us from “the human (especially socioeconomic and political) relations responsible for precipitating this social upheaval.” It is these relations, after all, that “largely determine who uses [technologies] and for what purposes.” And, it is the human use of technology that makes all the difference, because, as Marx puts it, “Technology, as such, makes nothing happen.”[1]

As you might imagine, I find that Marx’s point complements a critique of what I’ve called Borg Complex rhetoric. It’s easier to refuse responsibility for technological change when we can attribute it to some fuzzy, inchoate idea of technology, or worse, what technology wants. That latter phrase is the title of a book by Kevin Kelly, and it may be the best example on offer of the problem Marx was combatting in his article.

But … I don’t necessarily find that term altogether useless or hazardous. For instance, some time ago I wrote the following:

“Speaking of online and offline and also the Internet or technology – definitions can be elusive. A lot of time and effort has been and continues to be spent trying to delineate the precise referent for these terms. But what if we took a lesson from Wittgenstein? Crudely speaking, Wittgenstein came to believe that meaning was a function of use (in many, but not all cases). Instead of trying to fix an external referent for these terms and then call out those who do not use the term as we have decided it must be used or not used, perhaps we should, as Wittgenstein put it, ‘look and see’ the diversity of uses to which the words are meaningfully put in ordinary conversation. I understand the impulse to demystify terms, such as technology, whose elasticity allows for a great deal of confusion and obfuscation. But perhaps we ought also to allow that even when these terms are being used without analytic precision, they are still conveying sense.”

As you know from previous posts, I’ve been working through Langdon Winner’s Autonomous Technology (1977). It was with a modicum of smug satisfaction, because I’m not above such things, that I read the following in Winner’s Introduction:

“There is, of course, nothing unusual in the discovery that an important term is ambiguous or imprecise or that it covers a wide diversity of situations. Wittgenstein’s discussion of ‘language games’ and ‘family resemblances’ in Philosophical Investigations illustrates how frequently this occurs in ordinary language. For many of our most important concepts, it is futile to look for a common element in the phenomena to which the concept refers. ‘Look and see whether there is anything common to all.–For if you look at them you will not see something that is common to all, but similarities, relationships, and a whole series of them at that.'”

Writing in the late ’70s, Winner claimed, “Technology is a word whose time has come.” After a period of relative neglect or disinterest, “Social scientists, politicians, bureaucrats, corporate managers, radical students, as well as natural scientists and engineers, are now united in the conclusion that something we call ‘technology’ lies at the core of what is most troublesome in the condition of our world.”

To illustrate, Winner cites Allen Ginsberg — “Ourselves caught in the giant machine are conditioned to its terms, only holy vision or technological catastrophe or revolution break ‘the mind-forg’d manacles.'” — and the Black Panthers: “The spirit of the people is greater than the man’s technology.”

For starters, this is a good reminder to us that we are not the first generation to wrestle with the place of technology in our personal lives and in society at large. Winner was writing almost forty years ago, after all. And Winner rightly points out that his generation was not the first to worry about such matters either: “We are now faced with an odd situation in which one observer after another ‘discovers’ technology and announces it to the world as something new. The fact is, of course, that there is nothing novel about technics, technological change, or advanced technological societies.”

While he thinks that technology is a word “whose time has come,” he is not unaware of the sorts of criticisms articulated by Leo Marx. These criticisms had then been made of the manner in which Jacques Ellul defined technology, or, more precisely, la technique: “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.”

Against Ellul’s critics, Winner writes, “While Ellul’s addition of ‘absolute efficiency’ may cause us difficulties, his notion of technique as the totality of rational methods closely corresponds to the term technology as now used in everyday English. Ellul’s la technique and our technology both point to a vast, diverse, ubiquitous totality that stands at the center of modern culture.”

It is at this point that Winner references Wittgenstein in the paragraph cited above. He then acknowledges that the way in which technology tends to be used leads to the conclusion that “technology is everything and everything is technology.” In other words, it “threatens to mean nothing.”

But Winner sees in this situation something of interest, and here is where I’m particularly inclined to agree with him against critics like Leo Marx. Rather than seek to impose a fixed definition or banish the term altogether, we should see in this situation “an interesting sign.” It should lead us to ask, “What does the chaotic use of the term technology indicate to us?”

Here is how Winner answers that question: “[…] the confusion surrounding the concept ‘technology’ is an indication of a kind of lag in public language, that is, a failure of both ordinary speech and social scientific discourse to keep pace with the reality that needs to be discussed.” There may be a better way, but “at present our concepts fail us.”

Winner follows with a brief discussion of the unthinking polarity into which discussions of technology consequently fall: “discussion of the political implications of advanced technology have a tendency to slide into a polarity of good versus evil.” We might add that this is not only a problem with discussions of the political implications of advanced technology; it is also a problem with discussions of the personal implications of advanced technology.

Winner adds that there “is no middle ground to discuss such things”; we encounter either “total affirmation” or “total denial.” In Winner’s experience, ambiguity and nuance are hard to come by, and any criticism, that is, anything short of total embrace, meets with predictable responses: “You’re just using technology as a whipping boy,” or “You just want to stop progress and send us back to the Middle Ages with peasants dancing on the green.”[2]

While it may not be as difficult to find more nuanced positions today, in part because of the sheer quantity of easily accessible commentary, it still seems generally true that most popular discussions of technology tend to fall into either the “love it” or “hate it” category.

In the end, it may be that Winner and Marx are not so far apart after all. While Winner is more tolerant of the use of technology and finds that, in fact, its use tells us something important about the not unreasonable anxieties of modern society, he also concludes that we need a better vocabulary with which to discuss all that gets lumped under the idea of technology.

I’m reminded of Alan Jacobs’ oft-repeated invocation of Bernard Williams’s adage, “We suffer from a poverty of concepts.” Indeed, indeed. It is this poverty of concepts that, in part, explains the ease with which discussions of technology become mired in volatile love it or hate it exchanges. A poverty of concepts short circuits more reasonable discussion. Debate quickly morphs into acrimony because, in the absence of categories that might give reason a modest grip on the realities under consideration, the competing positions resolve into seemingly subjective expressions of personal preference and, thus, criticism becomes offensive.[3]

So where does this leave us? For my part, I’m not quite prepared to abandon the word technology. If nothing else it serves as a potentially useful Socratic point of entry: “So, what exactly do you mean by technology?” It does, to be sure, possess a hazardous tendency. But let’s be honest, what alternatives do we have left to us? Are we to name every leaf because speaking of leaves obscures the multiplicity and complexity of the phenomena?

That said, we cannot make do with technology alone. We should seek to remedy that poverty of our concepts. Much depends on it.

Of course, the same conditions that led to the emergence of the more recent expansive and sometimes hazardous use of the word technology are those that make it so difficult to arrive at a richer, more useful vocabulary. Those conditions include, but are not limited to, the ever-expanding ubiquity and complexity of our material apparatus and of the technological systems and networks in which we are enmeshed. The force of these conditions was first felt in the wake of the industrial revolution, and in the ensuing 200 years it has only intensified.

To the scale and complexity of steam-powered industrial machinery was added the scale and complexity of electrical systems, global networks of transportation, nuclear power, computers, digital devices, the Internet, global financial markets, etc. To borrow a concept from political science, technical innovation functions like a sort of ratchet effect. Scale and complexity are always torqued up, never released or diminished. And this makes it hard to understand this pervasive thing that we call technology.

For some time, through the early to mid-twentieth century, we outsourced this sort of understanding to the expert and managerial class. The post-war period witnessed a loss of confidence in the experts and managers; hence it yielded the heightened anxiety about technology that Winner registers in the ’70s. Four decades later, we are still waiting for new and better forms of understanding.

______________________________________

[1] I’m borrowing the bulk of this paragraph from an earlier post.

[2] Here again I found an echo in Winner of some of what I had also concluded.

[3] We suffer not only from a poverty of concepts, but also, I would add, from a poverty of narratives that might frame our concepts. See the end of this post.

What’s Really At Stake When We Debate Technology

What does the critic love? More specifically, what does the critic of technology love? This question presented itself to me while I was thinking about a comment left on a recent post. The comment questioned whether mourning or celebrating technology was proper to the role of the critic. Naturally, I wrote about it. It was, as is often the case, an exercise in clarifying my thoughts through writing.

I’ve thought some more about the work of technology criticism today, and again it was thanks to some online interactions. Let me put these before you and then offer a few more thoughts on the matter.

Evan Selinger tweeted the following:

And then:

Selinger, whose doctoral work was advised by Don Ihde, echoes Ihde, who wrote:

“… I would say the science critic would have to be a well-informed, indeed much better than simply well-informed amateur, in its sense as a ‘lover’ of the subject matter, and yet not the total insider …  Just as we are probably worst at our own self-criticism, that move just away from self-identity is needed to position the critical stance. Something broader, something more interdisciplinary, something more ‘distant’ is needed for criticism.”

Later on, Nathan Jurgenson made the following comment during an exchange about his recent essay:

“while it is technically true there has been a “loss” of sorts, i think it might be better to say at this juncture there has been a “change”; a change in how our reality has been augmented over time via various information technologies.”

I replied:

“Change” is certainly a more value-neutral way of putting it than “loss,” and depending on rhetorical context it certainly has its strengths. Sorting better and worse, of course, entails a normative framework of some sort, etc.

All of this began coalescing in my mind and what follows are some of the conclusions that emerged. Of course, I’m not claiming that these conclusions are necessarily entailed by the comments of others above. As the “Acknowledgements” in books always put it: I’m indebted to these, but any errors of fact or judgment are mine.

There is a reason why, as Selinger and Ihde put it, each in their own way, the critic must be something of an outsider. Criticism of technology, if it moves beyond something like mere description and analysis, implies making what amount to moral and ethical judgments. The critic of technology, if they reach conclusions about the consequences of technology for the lives of individual persons and the lives of institutions and communities, will be doing work that necessarily carries ethical implications.

In this they are not altogether unlike the music critic or the literary critic who is expected to make judgments about the merits of a work of art given the established standards of their field. These standards take shape within an established and institutionalized tradition of criticism. Likewise, the critic of technology — if they move beyond questions such as “Does this technology work?” or “How does this technology work?” to questions such as “What are the social consequences of this technology?” — is implicated in judgments of value and worth. Judgments, it might be argued, of greater consequence than those of the art or literary critic.

But according to what standards and from within which tradition? Not the standards of “technology,” if such could even be delineated, because these would merely be matters of efficiency and functionality (although even these are not exactly “value neutral”). It was, for example, a refusal to evaluate technology on its own terms that characterized the vigorous critical work of the late Jacques Ellul. As Ellul saw it, technology had achieved its nearly autonomous position in society because it was shielded from substantive criticism — criticism, that is, which refused to evaluate technology by its own standards. The critic of technology, then, proceeds with an evaluative framework that is independent of the logic of “technoscience,” as Ihde called it, and so they become an outsider to the field.

The libertarian critic, the Marxist critic, the Roman Catholic critic, the posthumanist critic, and so on — each advances their criticism of technology from the perspective of their ethical commitments. Their criticism of technology flows from their loves. Each criticizes technology according to the larger moral and ethical framework implied by the movements, philosophies, and institutions that have shaped their identity. And, of course, so it must be. There is no avoiding this, and there is nothing particularly undesirable about this state of affairs. It is true that prior to reaching conclusions about the moral and ethical consequences of technology, careful and patient work needs to be done to understand technology. But I suspect this work of understanding, particularly because it can be arduous, is typically driven by some deeper commitment that lends urgency and passion to the critic’s work.

Such commitments are often veiled for the sake of appearing appropriately objective and neutral within certain rhetorical contexts that demand as much, the academy for example. But I suspect that there are times when debates about the merits of technology would be advanced if the participants would acknowledge the tacit ethical frameworks that underlie the positions being staked out. And this is because, in such cases, the technology in question is only a proxy for something else — the object of the critic’s love.