The Inhumanity of Smart Technology

I’m allergic to hyperbole. That said, Evgeny Morozov identifies one of the most important challenges we face in the coming years:

“There are many contexts in which smart technologies are unambiguously useful and even lifesaving. Smart belts that monitor the balance of the elderly and smart carpets that detect falls seem to fall in this category. The problem with many smart technologies is that their designers, in the quest to root out the imperfections of the human condition, seldom stop to ask how much frustration, failure and regret is required for happiness and achievement to retain any meaning.

It’s great when the things around us run smoothly, but it’s even better when they don’t do so by default. That, after all, is how we gain the space to make decisions—many of them undoubtedly wrongheaded—and, through trial and error, to mature into responsible adults, tolerant of compromise and complexity.”

Exactly right.

“Out of the crooked timber of humanity no straight thing was ever made,” Kant observed. Corollary to keep in mind: If a straight thing is made, it will be because humanity has been stripped out of it.

What is the endgame of the trajectory of innovation that is determined to eliminate human error, deviance, and folly? In every field of human endeavor — whether it be industry, medicine, education, or governance — technological innovation reduces human involvement, thought, and action in the name of precision, efficiency, and effectiveness.

Morozov’s forthcoming book, To Save Everything, Click Here: The Folly of Technological Solutionism, targets what he has called “solutionism,” the temptation, I take it without having read the book yet, to view the Internet as the potential solution to every conceivable problem. I’m tempted to suggest for Morozov the target of his next book: eliminationism — the progressive elimination of human thought and action wherever possible. Life will increasingly consist of automated processes, actions, and interactions that will envelop and frame the human and render the human superfluous. Worse yet, insofar as the human is ultimately the root of our inconveniences and our problems, solutionism’s ultimate trajectory must lead to eliminationism.

There are tragic associations haunting that last formulation, so let me be clear. It is not (necessarily) the elimination of human beings that I’m worried about; it is the elimination of our humanity. The fear — and why not, let’s embrace its most popular cultural icon — is that we will be rendered zombies: alive but not living, stripped of the possibility for error, risk, failure, triumph, joy, redemption, and much of what renders our lives tragically, gloriously meaningful.

Albert Borgmann had it right. We must distinguish between “trouble we reject in principle and accept in practice and trouble we accept in practice and in principle.” In the former category, Borgmann has in mind troubles on the order of car accidents and cancer. By “accepting them in practice,” Borgmann means that at the personal level we must cope with such tragedies when they strike. But these are troubles that we oppose in principle, and so we seek cures for cancer and improved highway safety.


Against these, Borgmann opposes troubles that we also accept in practice, but ought to accept in principle as well. Here the examples are preparation of a meal and hiking a mountain. These sorts of troubles, sometimes not without their real dangers, could be opposed in principle — never prepare meals at home, never hike — but such avoidance would also prevent us from experiencing their attendant joys and satisfactions. If we seek to remove all trouble or risk from our lives; if we always opt for convenience, efficiency, and ease; if, in other words, we aim indiscriminately at the frictionless life; then we simultaneously rob ourselves of the real satisfactions and pleasures that enhance and enrich our lives — that, in fact, make our lives fully human.

Huxley had it right, too:

“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.”

“In fact,” said Mustapha Mond, “you’re claiming the right to be unhappy.”

“All right then,” said the Savage defiantly, “I’m claiming the right to be unhappy.”

In claiming the right to be unhappy, the Savage was claiming the right to a fully human existence. It is a right we must take increasing care to safeguard against our own fascination with the promises of technology.

MOOCs: Market Solution for Market Problems?

Of the writing about MOOCs there seems to be no end … but here is one more thought. Many of the proponents of MOOCs and online education* in general couch their advocacy in the language of competition, market forces, disrupting** the educational cartels***, etc. They point, as well, to the rising costs of a college education — costs which, in their view, are unjustifiable because they do not yield commensurate rewards.**** MOOCs introduce innovation, efficiency, technological entrepreneurship, and cost-effectiveness, and they satisfy consumer demand. The market to the rescue. But look closely at the culprits these proponents of MOOCs blame for rising costs: frivolous spending on items not directly related to education, such as luxury dorms, massive sporting complexes, state-of-the-art student recreational centers, out-of-control administrative expansion, etc. Why are colleges spending so much money on such things? Because market logic triumphed. Students became consumers, and colleges had to compete for them by offering a host of amenities that, in truth, had little to do with education. Market solutions for market problems, then.***** MOOCs are just the extension of a logic that triumphed long ago. On this score, I’d suggest that universities need to recover something of their medieval heritage rather than heed the siren songs of the digital age.

_______________________________________________

*I resisted, in the name of the semblance of objectivity, the urge to place ironical quotation marks around education.

**I resisted, in the name of decency, the urge to curse out loud as I typed disruption.

***Heretofore known as universities.

****These rewards are, of course, always understood to be pecuniary.

*****This is the hidden genius of disruption, brought to you by many of the same folks who gave us technological fixes for technological problems.

Eternal Sunshine of the Spotless Digital Archive

The future is stubbornly resistant to prediction, but we try anyway. I’ve been thinking lately about memory, technologies that mediate our memories, and the future of the past.

The one glaring fact — and I think it is more or less incontrovertible — is this: Digital technology has made it possible to capture and store vast amounts of data.

Much of this data, for the average person, involves the documentation of daily life. This documentation is often photographic or audio-visual.

What difference will this make? Recently, I suggested that in an age of memory abundance, memory will be devalued. There will be too much of it, and it will be out there somewhere — on a hard drive, on my phone/computer, or in the cloud. As we confidently and routinely outsource our remembering to digital devices and archives, we will grow relatively indifferent to personal memories. (Although I don’t think indifferent is the best word — perhaps unattached.)

This too seems to me incontrovertible. It is the overlooked truth in Plato’s much-maligned critique of writing. Externalized memory is only figuratively related to internalized memory.

But I was assuming the permanence of these digital memories. What if our digital archives prove to be impermanent? What if in the coming years and decades we realize that our digital memories are gradually fading into oblivion?

Consider the following from Bruce Sterling: “Actually it’s mostly the past’s things that will outlive us. Things that have already successfully lived a long time, such as the Pyramids, are likely to stay around longer than 99.9% of our things. It might be a bit startling to realize that it’s mostly our paper that will survive us as data, while a lot of our electronics will succumb to erasure, loss, and bit rot.”

It might turn out that Snapchat is a premonition. What then?

Scenario A: Digital memory decay is a technical problem that is eventually solved; trajectory of memory abundance and consequent indifference plays out.

Scenario B: Digital memory decay remains a persistent problem.

Scenario B1: We devote ourselves to rituals of digital memory preservation. Therapy first referred to the care of the gods. We think of it as care for the self, sometimes involving the recollection of repressed memories. Perhaps in the future these senses of the word will mutate into therapy understood as the care of our digital memories.

Scenario B2: By the time the problem of digital memory decay is recognized as a threat, we no longer care. Memory, we decide, is a burden. Mutually reinforcing decay and indifference then yield a creeping amnesia of long-term memory. Eternal sunshine indeed.

Scenario B3: We reconsider our digital dependence and reintegrate analog and internalized forms of memory into our ecology of remembrance.

Scenario C: All of this is wrong.

In truth, I can hardly imagine a serious indifference to personal memory. But then again, I’m sure those who lived in societies whose cultural forms were devoted to tribal remembrance could hardly imagine serious indifference to the memory of the tribe. They probably couldn’t imagine someone caring much about their individual history; it was likely an incoherent concept. Thinking about the future involves the thinking of that which we can’t quite imagine, or is it the imagining of that which we can’t quite think. In any case, it’s not really about the future anyway. It’s about trying to make some sense of forces now at work and trying to reckon with the long reach of the past, which, remembered or not, will continue to make itself felt in the present.

The Lifestream Stops

David Gelernter, 2013:

“And today, the most important function of the internet is to deliver the latest information, to tell us what’s happening right now. That’s why so many time-based structures have emerged in the cybersphere: to satisfy the need for the newest data. Whether tweet or timeline, all are time-ordered streams designed to tell you what’s new … But what happens if we merge all those blogs, feeds, chatstreams, and so forth? By adding together every timestream on the net — including the private lifestreams that are just beginning to emerge — into a single flood of data, we get the worldstream: a way to picture the cybersphere as a whole … What people really want is to tune in to information. Since many millions of separate lifestreams will exist in the cybersphere soon, our basic software will be the stream-browser: like today’s browsers, but designed to add, subtract, and navigate streams.”

E. M. Forster, 1909:

“Who is it?” she called. Her voice was irritable, for she had been interrupted often since the music began. She knew several thousand people; in certain directions human intercourse had advanced enormously.

But when she listened into the receiver, her white face wrinkled into smiles, and she said:

“Very well. Let us talk, I will isolate myself. I do not expect anything important will happen for the next five minutes — for I can give you fully five minutes …”

She touched the isolation knob, so that no one else could speak to her. Then she touched the lighting apparatus, and the little room was plunged into darkness.

“Be quick!” she called, her irritation returning. “Be quick, Kuno; here I am in the dark wasting my time.”

[Conversation ensues and comes to an abrupt close.]

Vashti’s next move was to turn off the isolation switch, and all the accumulations of the last three minutes burst upon her. The room was filled with the noise of bells, and speaking-tubes. What was the new food like? Could she recommend it? Has she had any ideas lately? Might one tell her one’s own ideas? Would she make an engagement to visit the public nurseries at an early date? — say this day month.

To most of these questions she replied with irritation — a growing quality in that accelerated age. She said that the new food was horrible. That she could not visit the public nurseries through press of engagements. That she had no ideas of her own but had just been told one — that four stars and three in the middle were like a man: she doubted there was much in it.

When I read Gelernter’s piece and his worldstream metaphor (illustration below), I was reminded of Forster’s story and the image of Vashti, sitting in her chair, immersed in a cacophonous real-time stream of information. Of course, the one obvious difference between Gelernter’s and Forster’s conceptions of the relentless stream of information into which one plunges is the nature of the interface. In Forster’s story, “The Machine Stops,” the interface is anchored to a particular place. It is an armchair in a bare, dark room from which characters in his story rarely move. Gelernter assumes the mobile interfaces we’ve grown accustomed to over the last several years.

In Forster’s story, the great threat the Machine poses to its users is that of radical disembodiment. Bodies have atrophied, physicality is a burden, and all the ways in which the body comes to know the world have been overwhelmed by a perpetual feeding of the mind with ever more derivative “ideas.” This is a fascinating aspect of the story. Forster anticipates the insights of later philosophers such as Merleau-Ponty and Hubert Dreyfus as well as the many researchers helping us understand embodied cognition. Take this passage for example:

You know that we have lost the sense of space. We say “space is annihilated”, but we have annihilated not space, but the sense thereof. We have lost a part of ourselves. I determined to recover it, and I began by walking up and down the platform of the railway outside my room. Up and down, until I was tired, and so did recapture the meaning of “Near” and “Far”. “Near” is a place to which I can get quickly on my feet, not a place to which the train or the air-ship will take me quickly. “Far” is a place to which I cannot get quickly on my feet; the vomitory is “far”, though I could be there in thirty-eight seconds by summoning the train. Man is the measure. That was my first lesson. Man’s feet are the measure for distance, his hands are the measure for ownership, his body is the measure for all that is lovable and desirable and strong.

But how might Forster have conceived of his story if his interface had been mobile? Would his story still be a Cartesian nightmare? Or would he understand the danger to be posed to our sense of time rather than our sense of place? He might have worried not about the consequences of being anchored to one place, but rather being anchored to one time — a relentless, enduring present.

Were I Forster, however, I wouldn’t change his focus on the body. For aren’t our body, and the physicality of lived experience that the body perceives, also our most meaningful measure of time? Do not our memories etch themselves in our bodies? Does not a feel for the passing years emerge from the transformation of our bodies? The philosopher Merleau-Ponty spoke of the “time of the body.” Consider Shaun Gallagher’s exploration of Merleau-Ponty’s perspective:

“Temporality is in some way a ‘dimension of our being’ … More specifically, it is a dimension of our situated existence. Merleau-Ponty explains this along the lines of the Heideggerian analysis of being-in-the-world. It is in my everyday dealings with things that the horizon of the day gets defined: it is in ‘this moment I spend working, with, behind it, the horizon of the day that has elapsed, and in front of it, the evening and night – that I make contact with time, and learn to know its course’ …”

Gallagher goes on to cite the following passage from Merleau-Ponty:

“I do not form a mental picture of my day, it weighs upon me with all its weight, it is still there, and though I may not recall any detail of it, I have the impending power to do so, I still ‘have it in hand.’ . . . Our future is not made up exclusively of guesswork and daydreams. Ahead of what I see and perceive . . . my world is carried forward by lines of intentionality which trace out in advance at least the style of what is to come.”

Then Gallagher adds, “Thus, Merleau-Ponty suggests, I feel time on my shoulders and in my fatigued muscles; I get physically tired from my work; I see how much more I have to do. Time is measured out first of all in my embodied actions as I ‘reckon with an environment’ in which ‘I seek support in my tools, and am at my task rather than confronting it.'”

That last distinction between being at my task rather than confronting it seems particularly significant, especially as it involves the support of tools. Our sense of time, like our sense of place, is not an unchangeable given. It shifts and alters through technological mediation. Melvin Kranzberg, in the first of his six laws of technology, reminds us, “Technology is neither good nor bad; nor is it neutral.” Our technological mediation of space and time is never neutral; and while it may not be “bad” or “good” in some abstract sense, it can be more or less humane, more or less conducive to our well-being. If the future of the Internet is the worldstream, we should perhaps think twice before plunging.

[Illustration: Gelernter’s worldstream]

Borg Complex Case Files 2

UPDATE: See the Borg Complex primer here.

______________________________________

Alright, let’s review. A Borg Complex is exhibited by writers and pundits whenever you can sum up their message with the phrase: “Resistance is futile.”

The six previously identified symptoms of a Borg Complex are as follows:

1. Makes grandiose, but unsupported claims for technology

2. Uses the term Luddite a-historically and as a casual slur

3. Pays lip service to, but ultimately dismisses genuine concerns

4. Equates resistance or caution to reactionary nostalgia

5. Starkly and matter-of-factly frames the case for assimilation

6. Announces the bleak future for those who refuse to assimilate

In an effort to further refine our diagnostic instruments, I am now adding two more related symptoms to the list. The first of these arises from a couple of cases submitted by Nick Carr and is summarized as follows:

7. Expresses contemptuous disregard for the achievements of the past

Consider this claim by Tim O’Reilly highlighted by Carr:

“I don’t really give a shit if literary novels go away. … the novel as we know it today is only a 200-year-old construct. And now we’re getting new forms of entertainment, new forms of popular culture.”

Well, there you have it. This same statement serves to illustrate the second symptom I’m introducing:

8. Refers to historical parallels only to dismiss present concerns.

This symptom identifies historical precedents as a way of saying, “Look, people worried about stuff like this before and they survived, so there’s nothing to be concerned about now.” It sees all concerns through the lens of Chicken Little. The sky never falls. I would suggest that the Boy Who Cried Wolf is the better parable. The wolf sometimes shows up.

To this list of now eight symptoms, we might also add two Causes.

1. Self-interest, usually commercial (ranging from more tempered to crass)

2. Fatalistic commitment to technological determinism (in pessimistic and optimistic varieties)

Now let’s consider some more cases related to education, a field that is particularly susceptible to Borg Complex claims. In a recent column, Thomas Friedman gushed over MOOCs:

“Nothing has more potential to lift more people out of poverty — by providing them an affordable education to get a job or improve in the job they have. Nothing has more potential to unlock a billion more brains to solve the world’s biggest problems. And nothing has more potential to enable us to reimagine higher education than the massive open online course, or MOOC, platforms that are being developed by the likes of Stanford and the Massachusetts Institute of Technology and companies like Coursera and Udacity.”

These are high, indeed utopian hopes, and in their support Friedman offered two heartwarming anecdotes of MOOC success. In doing so, he committed the error of tech-utopians (modeled on what Pascal took to be the error of Stoicism): believing that the wonderful use to which a technology can be put will be the use to which it is always put.

Friedman wrapped up his post with the words of M.I.T. president, L. Rafael Reif, “There is a new world unfolding and everyone will have to adapt.”

And there it is — Borg Complex.

In a long piece in The Awl (via Nathan Jurgenson), Maria Bustillos discusses an online exchange on the subject of MOOCs between Aaron Bady and Clay Shirky. Take the time to read the whole thing if you’re curious/worried about MOOCs. But here is Shirky on the difference between himself and Bady: “Aaron and I agree about most of the diagnosis; we disagree about the prognosis. He thinks these trends are reversible, and I don’t; Udacity could go away next year and the damage is already done.”

Once again — Borg Complex.

At this juncture it’s worth asking, as Jurgenson did in sending me this last case, “Is he wrong?”

Is it not the case that folks who exhibit a Borg Complex often turn out to be right? Isn’t it inevitable that their vision will play out regardless of what we say or do?

My initial response is … maybe. As I’ve said before, a Borg Complex diagnosis is neutral as to the eventual veracity of the claims. It is more about an attitude toward technology in the present that renders the claims, if they are realized, instances of self-fulfilling prophecy.

The appearance of inevitability is a trick played by our tendency to make a neat story out of the past. Historians know this better than most. What looks like inevitability to us is only a function of zooming so far out that all contingencies fade from view. But in the fine-grain texture of lived experience, we know that there are countless contingencies that impinge upon the unfolding of history. It is also a function of forgetfulness. If only we had a catalog of all that we were told was inevitable that didn’t finally transpire. We only remember successes and then we tell their story as if they had to succeed all along, and this lends claims of inevitability an undeserved plausibility.

One other consideration: It is always worth asking, Who stands to profit from tales of technological inevitability?

Consider the conclusion of a keynote address delivered at the Digital-Life-Design Conference in Munich by Max Levchin:

“So to conclude: I believe that in the next decades we will see huge number of inherently analog processes captured digitally. Opportunities to build businesses that process this data and improve lives will abound.”

The “analog processes” Levchin is talking about are basically the ordinary aspects of our human lives that have yet to be successfully turned into digital data points. Here’s Carr on the vision Levchin lays out:

“This is the nightmare world of Big Data, where the moment-by-moment behavior of human beings — analog resources — is tracked by sensors and engineered by central authorities to create optimal statistical outcomes. We might dismiss it as a warped science fiction fantasy if it weren’t also the utopian dream of the Max Levchins of the world. They have lots of money and they smell even more: ‘I believe that in the next decades we will see huge number of inherently analog processes captured digitally. Opportunities to build businesses that process this data and improve lives will abound.’ It’s the ultimate win-win: you get filthy rich by purifying the tribe.”

Commenting on the same address and on Carr’s critique, Alan Jacobs captures the Borg-ish nature of Levchin’s rhetoric:

“And Levchin makes his case for these exciting new developments in a very common way: ‘This is going to add a huge amount of new kinds of risks. But as a species, we simply must take these risks, to continue advancing, to use all available resources to their maximum.’ Levchin doesn’t really spell out what the risks are — though Nick Carr does — because he doesn’t want us to think about them. He just wants us to think about the absolutely unquestionable ‘must’ of using ‘all available resources to their maximum.’ That ‘advance’ in that direction might amount to ‘retreat’ in actual human flourishing does not cross his mind, because it cannot. Efficiency in the exploitation of ‘resources’ is his only canon.”

We “must” do this. It is inevitable that this will happen. Resistance is futile.

Except that very often it isn’t. There are choices to be made. A figure no less identified with technological determinism than Marshall McLuhan reminded us: “There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.” The handwaving rhetoric that I’ve called a Borg Complex is resolutely opposed to just such contemplation, and very often for the worst of motives. By naming it my hope is that it can be more readily identified, weighed, and rejected.

Elsewhere McLuhan said, “the only alternative is to understand everything that is going on, and then neutralize it as much as possible, turn off as many buttons as you can, and frustrate them as much as you can … Anything I talk about is almost certainly to be something I’m resolutely against, and it seems to me the best way of opposing it is to understand it, and then you know where to turn off the button.”

Understanding is a form of resistance. So remember, carry on with the work of intelligent, loving resistance where discernment and wisdom deem it necessary.

_______________________________________________

This is the third in a series of Borg Complex posts; you can read previous installments here.

I’ve set up a tumblr to catalog instances of Borg Complex rhetoric here. Submissions welcome.
