DNA Kits, Alchemy, and the Essence of Technology

“Prospective roommates receive a Roommate DNA kit, provide a saliva sample and take an online personality test; in return they’re shown how their DNA influences their personalities and receive suggestions on the perfect blend of characters the individual should live with.”

That’s a bit of text taken from the screen shot of an email that Zeynep Tufekci tweeted yesterday. Here’s the whole thing.


You can read more about the venture here.

Responding to Tufekci, Cathy Davidson commented, “Imagine what scientists of the future (if there IS a future) will say about the ridiculous alchemies of data ‘science’ of the first quarter of the 21st century . . .” To which David Perry replied, “alchemy is a good word choice there. I like phrenology, but I think driving the analogies back a few centuries works too.”

What if, however, we’re not really talking about an analogy, but something more akin to an inherited family resemblance? What if alchemy is more directly and more closely related to what we think of as science and technology?

Historians of science and technology will confirm that this is exactly the case. The most elegant formulation of this connection is provided by C.S. Lewis, who, while not a historian of science, was, as a scholar of medieval and Renaissance literature, intimately familiar with the relevant intellectual history. In The Abolition of Man, Lewis observed,

I have described as a ‘magician’s bargain’ that process whereby man surrenders object after object, and finally himself, to Nature in return for power. And I meant what I said. The fact that the scientist has succeeded where the magician failed has put such a wide contrast between them in popular thought that the real story of the birth of Science is misunderstood. You will even find people who write about the sixteenth century as if Magic were a medieval survival and Science the new thing that came in to sweep it away. Those who have studied the period know better. There was very little magic in the Middle Ages: the sixteenth and seventeenth centuries are the high noon of magic. The serious magical endeavour and the serious scientific endeavour are twins: one was sickly and died, the other strong and throve. But they were twins. They were born of the same impulse. I allow that some (certainly not all) of the early scientists were actuated by a pure love of knowledge. But if we consider the temper of that age as a whole we can discern the impulse of which I speak.

There is something which unites magic and applied science while separating both from the wisdom of earlier ages. For the wise men of old the cardinal problem had been how to conform the soul to reality, and the solution had been knowledge, self-discipline, and virtue. For magic and applied science alike the problem is how to subdue reality to the wishes of men: the solution is a technique ….

Lewis Mumford and Jacques Ellul have made similar observations and drawn similar conclusions.

This connection is important if we are to understand not merely the nature of a given technology but the spirit that animates its creation and renders its adoption plausible or even desirable.

As Heidegger* famously taught, the essence of technology is nothing technological. This is to say that we won’t really understand technology if we focus on discrete technological artifacts. Something that is not, in that sense, technological animates and sustains the particular direction the technological project has taken in modern societies.

We are closer to the mark, in his view, if we approach the question of technology by the path of revealing. There is a mode of revealing, of making the world appear to us, that arises from and drives the development of modern technology. The essence of technology is a way of seeing or construing the world—as standing reserve, as raw material for projects of willful mastery and exploitation—that yields a particular kind of knowledge: instrumental knowledge, or, if you like, weaponized knowledge.

This way of seeing, and the knowledge that flows from it, generates results, but it also locks us into a narrow field of vision, and, perniciously, veils its true nature from us. We fail to grasp that what we see is conditioned and narrowed in this manner; we take it for granted that this is the only way to construe the world. Swaths of experience become invisible or unintelligible to us, the lure of certain kinds of action becomes irresistible, ways of being in the world are foreclosed.

In the passage cited earlier, Lewis, in my view, tracks with Heidegger’s diagnosis (independently so, as far as I know) and draws out an important implication: the reach of this particular way of seeing finally extends to how we understand what it means to be a human being. What begins as a project to subdue nature ends with the subduing of the human being. We become the final frontier of our own making and exploiting. (Lewis points out that what this really means is, of course, the power of a few, whom Lewis calls the Conditioners, over the many.)

The impulse to master and the confidence (faith, really) in technique are apparent in the suggestion that a DNA test will help you find the ideal roommate. This example, taken by itself, is rather trivial. Outbursts of enthusiasm around certain techniques emerge and fade throughout the history of technology, and marketers, master technicians themselves, take note. But this is beside the point. The point is that this tool, outlandish as it may still appear to some, partakes of, reveals, and reinforces a pattern that is prior to and more significant than this one technique and the use to which it is put.

Altogether this suggests a useful question or two: Upon what assumptions about human nature does my use of a technology depend? Or, from another direction, what assumptions about human nature does my use of a technology engender?

(A bit more about this in the next installment of The Convivial Society.)

* I grow increasingly uncomfortable making use of Heidegger’s work as it has become clear that his ties to the National Socialists ran deeper than his defenders have argued. Something like Ellul’s verdict seems right: “As early as 1934, Ellul was aware of Heidegger’s political views and concluded, as his long-time interviewer Patrick Troude-Chastenet writes, ‘that someone who made such gross errors of judgment in political thinking could be of no avail to him in his search for an understanding of the world in which we live.'” But for now, there it is. The line is from “The Question Concerning Technology,” which remains a canonical text in the philosophy of technology.



Why We Can’t Have Humane Technology

Back in 2014, I wrote a post called “Do Artifacts Have Ethics?” Yes, of course, was the answer, and I offered forty-one questions formulated to help us think about an artifact’s complex ethical dimensions. A few months later, in 2015, I wrote about what I called Humanist Tech Criticism.

You would think, then, that I’d be pleasantly surprised by the recent eruption of interest in both ethics of technology and humane technology. I am, however, less than enthusiastic but also trying not to be altogether cynical. I’ll elaborate in just a moment, but, first, here is a sampling of the spate of recent stories, articles, and essays about the twin themes of humanist technology and ethics of technology.

“Tech’s Ethical ‘Dark Side’: Harvard, Stanford and Others Want to Address It”
“Ethical Tech Will Require a Grassroots Revolution”
“The Internet of Things Needs a Code of Ethics”
“Early Facebook and Google Employees Form Coalition to Fight What They Built”
“The Big Tech Backlash Is Reaching a Tipping Point”
“Tech Backlash Grows as Investors Press Apple to Act on Children’s Use”
“Sean Parker unloads on Facebook”
“The Tech Humanist Manifesto”
“No One Is Coming. It Is Up To Us”
“Dear Zuck (and Facebook Product Teams)”
“The tech insiders who fear a smartphone dystopia”

You could easily find another two dozen stories and more just like these that focus on a set of interrelated topics: former Silicon Valley executives and designers lamenting their earlier work, current investors demanding more ethically responsible devices (especially where children are concerned), general fretting over the consequences of social media (especially Facebook) on our political culture, and reporting on the formation of the Center for Humane Technology.

As you might imagine, there have been critics of this recent ethical awakening among at least a few of the well-connected in Silicon Valley. Among the more measured are Maya Indira Ganesh’s two posts on Cyborgology and this post on the always insightful Librarian Shipwreck. Both are critical and suspicious of these new efforts, particularly the Center for Humane Technology. But neither forecloses the possibility that some good may yet come of it all. That seems a judicious approach to take.

Others, of course, have been even more critical, focusing on what can be interpreted as a rather opportunistic change of tune at little to no personal cost for most of these Silicon Valley executives and designers. This cynicism is not altogether unwarranted. I’m tempted by it myself, but it also appears to me that it cannot justly be applied indiscriminately to all of those individuals working for a more ethical and humane tech industry.

My concerns lie elsewhere. I’ve expressed some of them before. What they amount to, mostly, is this: all efforts to apply ethics to technology or to envision a more humane technology will flounder because there is no robust public consensus about either human flourishing or ethical norms.

Moreover, technology* is both cause and symptom of this state of affairs: it advances, more or less unchecked, precisely because of this absence while it progressively undermines the plausibility of such a consensus. Thus, we are stuck in a vicious cycle generated by a reinforcing confluence of political, cultural, economic, and technological forces.

Most of the efforts mentioned above appear to me, then, to address what amounts to the tip of the tip of the iceberg. That said, I want to avoid a critical hipsterism here—”I’m aware of problems so deep you don’t even realize they exist.” And I also do not want to suggest that any attempt at reform is useless unless it addresses the problem in its totality. But it may also be the case that such efforts, arising from and never escaping the more general technological malaise, only serve to reinforce and extend the existing situation. Tinkering with the apparatus to make it more humane does not go far enough if the apparatus itself is intrinsically inhumane.

Meaningful talk about ethics and human flourishing in connection with technology, to say nothing of meaningful action, might only be possible within communities that can sustain both a shared vision of the good life and the practices that embody such a vision. The problem, of course, is that our technologies operate at a scale that eclipses the scope of such communities.

In “Friday’s Child,” Auden makes the parenthetical observation, “When kings were local, people knelt.” Likewise, we might say that when technology was local, people ruled. Something changed once technology ceased to be local, that is to say once it evolved into complex systems that overlapped communities, states, countries, and cultures. Traditional institutions and cultural norms were no longer adequate. They could not scale up to keep pace with technology because their natural habitat was the local community.

A final set of observations: Modern technology, in the broadest sense we might imagine the phenomenon, closer to what Ellul means by technique, is formative. It tacitly conveys an anthropology, an understanding of what it means to be a human being. It does so in the most powerful way possible: inarticulately, as something more basic than a worldview or an ideology. It operates on our bodies, our perception, our habits; it shapes our imagination, our relationships, our desires.

The modern liberal order abets technology’s formative power to the degree that it disavows any strong claims about ethics and human flourishing. It is in the space of that disavowal that technology as an implicit anthropology and an implicit politics takes root and expands, framing and conditioning any subsequent efforts to subject it to ethical critique. Our understanding of the human is already conditioned by our technological milieu. Fundamental to this tacit anthropology, or account of the human, is the infinite malleability of human nature. Malleable humanity is a precondition to the unfettered expansion of technology. (This is why transhumanism is the proper eschatology of our technological order. Ultimately, humanity must adapt and conform, even if it means the loss of humanity as we have known it. As explicit ideology, this may still seem like a fringe position; as implicit practice, however, it is widely adopted.)

All of this accounts for why previous calls for more humane technology have not amounted to much. And this would be one other quibble I have with the work of the Center for Human Technology and others calling for humanistic technology: thus far there seems to be little awareness of or interest in a longstanding history of tech criticism that should inform their efforts. Again, this is not about critical hipsterism, it is about drawing on a diverse intellectual tradition that contains indispensable wisdom for anyone working toward more ethical and humane technology. Maybe that work is still to come. I hope that it is.

*I confess I’m using the word technology in a manner to which I myself sometimes object, as shorthand for a network of artifacts, techniques, implicit values, and political/industrial concerns. While we’re at it, I realize as well that pretty much every claim I make in this post is in need of substantial elaboration and support. Unfortunately, this rather dense, unpacked writing is what happens when I can only manage to write in the gaps afforded by my current state of affairs.

Relatedly: Democracy and Technology and One Does Not Simply Add Ethics To Technology.

Reminder: You can subscribe to my roughly twice-monthly newsletter, The Convivial Society, here.




Coming Soon: The Convivial Society, a Newsletter

I’ve had a Facebook page for this blog for a few years. I began using Twitter in 2011. For a brief while I experimented with Tumblr. In each case, the idea was to find an audience for what I wrote here. Lately, I’ve been rethinking my use of both Facebook and Twitter for this purpose.

Regarding Facebook, it no longer seems consistent for me to maintain a presence there. It’s the sort of inconsistency we ordinarily tend to live with, begrudgingly, because we imagine that we accrue some slight net benefit. I don’t even imagine as much, so, at no great cost to myself, it’s time to let that go.

Regarding Twitter, for most of the time that I’ve used the platform, I’ve done so awkwardly and half-heartedly. More recently, I’ve been more engaged with the platform, enjoyed more interactions, and have found that its use has come to feel a bit more natural. I’m not entirely pleased with the consequences. I find that if I imagine myself to be moderately well-informed about the negative effects of a technology, I’m tempted to imagine myself immune to them. Of course, this is far from the case. That said, I’m cutting back significantly on my use of Twitter.

While making these choices, I’ve also been thinking about alternative ways of reaching an audience, something, of course, which I imagine most people that write care about a little. The end of that thinking led me to launch a newsletter. It seems at once more consistent and more effective. The newsletter is a simple, non-coercive tool: it arrives unfailingly until you no longer want it.

I’ve titled the newsletter The Convivial Society. The title is a nod to both Jacques Ellul and Ivan Illich, authors, respectively, of The Technological Society and Tools for Conviviality. The first is a thoroughgoing critique of a society given over to what Ellul called technique, which included but was not limited to technology. The second, while also deeply critical of industrial society and its technology, offered a way to imagine a world where our tools served more humane ends.

Together, they embodied the kind of technology criticism I think we urgently need.

The newsletter, as I envision it right now, will feature important news items and essays related to technology, links to what I post here, and readings from the tech critical canon. I hope to include not only tech criticism but also whatever might help us to imagine alternative ways of being with technology.

As for its frequency, only time will tell. I aim to send out the inaugural installment late next week. You can subscribe here: The Convivial Society. Please feel free to share the link, of course. I’d like to imagine the newsletter being a useful source for anyone who wants to think more critically about technology and society.

Finally, I am open to suggestions and feedback.

Posting will continue here as per usual.

A Proper Education

Wendell Berry, writing not long after September 11, 2001:

The complexity of our present trouble suggests as never before that we need to change our present concept of education. Education is not properly an industry, and its proper use is not to serve industries, either by job-training or by industry-subsidized research. Its proper use is to enable citizens to live lives that are economically, politically, socially, and culturally responsible. This cannot be done by gathering or “accessing” what we now call “information” – which is to say facts without context and therefore without priority. A proper education enables young people to put their lives in order, which means knowing what things are more important than other things; it means putting first things first.

How (Not) to Learn From the History of Technology

One of the first things I wrote on this blog nearly eight years ago, on an “About” page that has since been significantly abbreviated, was that we should aim at neither unbridled enthusiasm for technology nor thoughtless pessimism. Obviously no one is going to accuse me of unbridled enthusiasm for technology. While I may more justly be accused of a measure of pessimism, which in my view is not altogether unwarranted, I trust it does not come across as thoughtless.

So what I would like to know, from those who tend to be on the other side of the divide, is what constitutes, in their view, legitimate expressions of concern or worry that will not be dismissed with handwaving rhetorical gestures about how people have always worried about new technologies, etc., etc. (For starters, let us set aside the words “worry” and “concern” altogether. They too readily evoke the image of fainting couches and give off the odor of smelling salts.)

Consider Zachary Karabell’s piece for Wired, “Demonized Smartphones Are Just Our Latest Technological Scapegoat.” Karabell is responding to a series of recent articles exploring the fraught relationship between children and smartphones. Most notably, two groups of Apple investors publicly called on the company to take action to combat smartphone addiction among children.¹ Karabell cites a handful of other examples.

I have tried to read Karabell carefully and sympathetically, but I am not entirely clear what I am to take from his piece. Chiefly, it seems he simply felt the need to bring some calm to what he perceived to be a panic about technology. (Given the title of the article, however, I’m not sure it’s the critics who are panicking: smartphones aren’t being criticized, they are being demonized!)

The first move in this direction is to remind us that “Alarm at the corrosive effects of new technologies is not new,” followed by obligatory references to Plato’s warning² about writing and the Catholic Church’s response to the printing press, after which we get brief mention of similar warnings about a series of other technologies from the telegraph to Grand Theft Auto.

This is an all-too-familiar litany, and my question is always the same: What’s the point?

This question is not meant to be dismissive. It is an honest question. What is the point of the litany? It cannot be, of course, that I should therefore discount the present warnings because this would be a non sequitur as Karabell himself acknowledges. “Just because these themes have played out benignly time and again,” he writes, “does not, of course, mean that all will turn out fine this time.”

Indeed not. And this is so for a reason that is easy to grasp: each technology is different. This is especially the case when we consider the capacities and scale of more recent technologies when compared to earlier examples. Early in his piece, Karabell asked, “Is today’s concern about smartphones any different than other generations’ anxieties about new technology?” The answer seems straightforward to me: Yes, obviously so … because we’re talking about different technologies.

But let’s go a bit further. How sure are we that things “have played out benignly time and again”? How would we know? Is mere survival the bar? If not, what is? By what standard are we to conclude that the impact of these more recent technologies has been altogether benign? As Karabell acknowledges, these earlier complaints are often not wrong; we just don’t care anymore. Should we? Does this say more about our insensibilities than it does about their anxieties? Frankly, I’m increasingly convinced that we must be prepared to ask such questions and consider them with care and imagination.

After we’ve been reminded that we are not the first generation to express a measure of concern about new technologies, we are presented with a brief catalogue of the problems attributed to smartphones. Karabell seems both to believe that these are genuine concerns that should not be ignored and that we are not in a position to give them much weight. It’s an intriguing tension within this piece. It is as if the author understands that he is dealing with valid criticisms but cannot quite bring himself to take them too seriously.

Chiefly, it would seem that Karabell wants us to be open-minded about new technologies. The jury is still out in his view, and we don’t yet know with certainty what the long term effects will be. This paragraph is representative:

Some might say that until we know more, it’s prudent, especially with children, to err on the side of caution and concern. There certainly are risks. Maybe we’re rewiring our brains for the worse; maybe we’re creating a generation of detached drones. But there also may be benefits of the technology that we can’t (yet) measure.

It’s hard for me to read that and draw any firm conclusions as to what Karabell thinks we ought to do. Which is fine. I don’t always know what to do regarding the stuff I write about. But this piece ostensibly aims at relieving concerns and dismissing warnings; I’m not sure it succeeds, at least I don’t see that it gives us any grounds to be relieved or to set warnings aside.

What’s more, if things do in fact play out benignly (assuming that everyone affected could agree on what that might mean), it would seem to me that the warnings and criticisms would be at least part of the reason why. Writers like Karabell assume that whatever early turbulence a new technology causes for a society, in time the society will right itself and cruise along smoothly. You would think, then, that such writers would enthusiastically welcome criticisms of new technologies in order to figure out how to steer through the turbulent period as quickly as possible. But this is rarely the case; they are merely annoyed.

It’s rarely the case because these technologies are often proxies for something much larger, something more like a worldview, an ideology, or a moral framework. Technology is code for Modernity or Progress or Reason, so to call a technology into question is to call these deeper values and commitments into question. Karabell’s closing paragraphs reveal as much.

“More than not,” he writes, “the innovations we call ‘technology’ have transformed and ameliorated the human condition. There may have been some loss of community, connection to the land, and belonging; even here, we tend to forget that belonging almost meant exclusion for those who didn’t fit or didn’t believe what their neighbors did.”

It is difficult to overestimate the degree to which those sentences unwittingly betray a host of moral judgments the author seems unable to perceive as such. “[L]oss of community, connection to the land, and belonging”—they are casually listed off as if the author has only heard rumors of people that care about such things and can’t quite fathom such attachments.

“The smartphone is today’s emblem of whether one believes in progress or decline,” Karabell writes in his last paragraph.

Maybe that’s just too much of a burden to put on a technology, any technology. Maybe progress and decline shouldn’t be measured exclusively by technological innovation. Maybe it is not the critic who needs to be admonished to consider new technologies with an open mind.


¹A note about the term “addiction.” It’s not altogether clear that this is a useful way of characterizing how anyone relates to specific technologies. That there is a measure of compulsion seems clear enough, though. For my part, I prefer much older language of order and disorder. We can, I think, speak of disordered relationships without recourse to clinical terminology. The language of order and disorder is broader and implies a moral framework that extends beyond the healthy/unhealthy paradigm.

²This may come off as pedantic, but I’d rather not read any more passing mentions of Plato/Socrates on writing and memory unless they are accompanied by some evidence that the author has actually read and grappled with the Phaedrus.