The Borg Complex Case Files

UPDATE: See the Borg Complex primer here.

______________________________________

“Resistance is futile.” This is what the Borg, of Star Trek fame, announces to its victims before it proceeds to assimilate their biological and technological distinctiveness. It is also what many tech gurus and pundits announce to their audiences as they dispense their tech-guru-ish wisdom. They don’t quite use those words, of course, but they might as well. This is why I’ve taken to calling this sort of rhetoric a Borg Complex.

I first wrote about the Borg Complex last June in response to an article on technology and religion which confidently announced that “religion will have to adapt.” The line, “Resistance is futile,” could have unobtrusively made its way into the article at any number of places.

Using this same article as a specimen, I identified six tell-tale symptoms of a Borg Complex.

1. Makes grandiose, but unsupported claims for technology

2. Uses the term Luddite a-historically and as a casual slur

3. Pays lip service to, but ultimately dismisses genuine concerns

4. Equates resistance or caution to reactionary nostalgia

5. Starkly and matter-of-factly frames the case for assimilation

6. Announces the bleak future for those who refuse to assimilate

These symptoms may occur singly or in some combination, and they may range from milder to more hysterical manifestations. Symptoms of the Borg Complex also tend to present with a smug, condescending tone, but this is not always the case. Those who suffer from a Borg Complex may also exhibit an earnest, pleading tone or one that is mildly annoyed and incredulous.

As a more recent example of symptom number 2, consider Tim Wu writing in the NY Times about the response of some communities to apps that allow one to book cabs or rent out an apartment:  “But they’re considerably less popular among city regulators, whose reactions recall Ned Ludd’s response to the automated loom.” Clearly a bad thing in Wu’s view.

An interesting case of the Borg Complex was on display in a Huffington Post interview with Evernote CEO Phil Libin. Libin is discussing Google Glass when he says:

“I’ve used it a little bit myself and – I’m making a firm prediction – in as little as three years from now I am not going to be looking out at the world with glasses that don’t have augmented information on them. It’s going to seem barbaric to not have that stuff. That’s going to be the universal use case. It’s going to be mainstream. People think it looks kind of dorky right now but the experience is so powerful that you feel stupid as soon as you take the glasses off… We’re spending a good amount of time planning for and experimenting with those.”

“It’s going to seem barbaric to not have that stuff.” Here’s an instance of the Borg Complex that does not fit neatly within the symptoms described above. It’s some combination of 1, 5, and 6, but there is something more going on here. Context provides a little clarity though. This case of the Borg Complex is wrapped up in the potential sale of some future product. So the symptoms are inflected by the marketing motive. It is perhaps a more passive-aggressive form of the Borg Complex, “You will not want to be without __________________ because everyone else will have _________________ and you’ll feel inadequate without __________________.”

A more direct and intense variation of the Borg Complex was on display in Nathan Harden’s essay about the future of higher education. Here are the opening lines:

“In fifty years, if not much sooner, half of the roughly 4,500 colleges and universities now operating in the United States will have ceased to exist. The technology driving this change is already at work, and nothing can stop it.”

Harden sums up his introduction with the announcement, “The college classroom is about to go virtual.”

Kevin Kelly, a tech-guru par excellence and one of unbounded optimism, also exhibits Borg Complex symptoms in his much talked about essay for Wired, “Better Than Human” (the title, it is worth clarifying, was not chosen by Kelly). Early on Kelly writes,

“It may be hard to believe, but before the end of this century, 70 percent of today’s occupations will likewise be replaced by automation. Yes, dear reader, even you will have your job taken away by machines. In other words, robot replacement is just a matter of time.”

And perhaps it may be so. A diagnosis of Borg Complex does not necessarily invalidate the claims being made. The Borg Complex is less about the accuracy of predictions and claims than about the psychological disposition that leads one to make such claims and the posture toward technology in the present that it engenders.

The contrasts among Libin, Harden, and Kelly are also instructive. Libin’s case of Borg Complex is inflected by commercial considerations. I’m not sure the same can be said for either Harden or Kelly. This moves us beyond the work of identifying symptoms and leads us to consider the causes or sources of the Borg Complex. Libin’s case points in one plausible direction. In the case of Kelly, we might reasonably look to his philosophy of autonomous technology. But further consideration of causes will have to wait for a future post.

Until then, carry on with the work of intelligent, loving resistance where discernment and wisdom deem it necessary.


For Your Consideration – 9

It’s been a while since the last of these posts, so there’s some older stuff thrown in here. Older, of course, by web standards.

“What Turned Jaron Lanier Against the Web?”:

“Social lasers of cruelty?” I repeat.

“I just made that up,” Lanier says. “Where everybody coheres into this cruelty beam….Look what we’re setting up here in the world today. We have economic fear combined with everybody joined together on these instant twitchy social networks which are designed to create mass action. What does it sound like to you? It sounds to me like the prequel to potential social catastrophe. I’d rather take the risk of being wrong than not be talking about that.”

“Google Should Not Choose Right and Wrong”:

“Such technologies endorse a rather impoverished view of their human masters. Humans, no longer seen as citizens capable of deliberation, are treated as cogs in a system preoccupied with self-optimisation, as if the very composition of that system was uncontroversial.”

“Invasion of the Cyber Hustlers”:

“Cybertheorists in general could perhaps be tolerated as harmlessly colourful futurists, were it not that so many of them, through the influence of their consulting work and virtual bully pulpits, are right now engaged in promoting widespread cultural vandalism. Whatever smells mustily of the pre-digital age must be torn down, “disrupted” and made anew in the sacred image of Google and Apple, except more open to the digital probings of the internet-company oligopoly. Long live sharing, social reading, volunteering free labour as a peer student or member of a company’s online “community”, and entrusting your documents to the data-mining mega-corporations that control the “cloud”.”

“The human race: Prosthetics, doping, computer implants: we take every upgrade we can get. But what is waiting for us at the finish line?”:

“For some, perhaps, this is a consummation devoutly to be wished. But it also reveals the essentially religious nature of much singularity-style techno-futurism: such visions constitute an eschatology in which human beings finally sublime into the cybersphere. It is the silicon Rapture — and this reminds us that ‘to enhance’ once meant literally ‘to raise up’. This desire to become machinic implicitly betrays a hatred of the flesh as severe as that of self-flagellating religious ascetics. For the devout of singularity theory, the perfection of humanity is synonymous with its destruction.”

“The End of the Map”:

“But my favorite cartographic error is the Mountains of Kong, a range that supposedly stretched like a belt from the west coast of Africa through half the continent. It featured on world maps and atlases for almost the entire 19th century. The mountains were first sketched in 1798 by the highly regarded English cartographer James Rennell, a man already famous for mapping large parts of India.”

“The Riddle and the Gift: The Hobbit at Christmas”:

“On his death-bed, the dwarf king, Thorin, commends Bilbo’s blend of courage and wisdom, adding, “if more of us valued food and cheer and song above hoarded gold, it would be a merrier world.” Food and cheer are transitory pleasures, which take their value from the moment and the company.”

“The Body Medium and Media Ecology: Disembodiment in the Theory and Practice of Modern Media” [PDF]:

“The body as medium and its disembodiment in the theory and practice of media is an imperative problem for media ecology.”

“Jerry Seinfeld Intends to Die Standing Up”:

“In his jokes he often arranges life’s messy confusions, shrewdly and immaculately, into a bouquet of trivial irritants. Seinfeld’s comedic persona is unflappable — annoyed plenty, but unmarked by extremes of emotion, much less tragedy.”

“Why Stephen Greenblatt is Wrong — and Why It Matters”:

“This is a powerful vision of the world entering a prolonged period of cultural darkness. If it were true, then Greenblatt’s second Swerve, the anti-religious polemic, also would deserve every award and plaudit it won. However, Greenblatt’s vision is not true, not even remotely.”

“Saying Goodbye to Now”:

“It’s an era of controlled deprivations and detoxification, of fasts and cleanses. Perhaps everyone should make a weekly ritual of twenty-four hours of undocumented life. Periods of time in which memory must do all the heavy lifting, or none of it, as it chooses, the consequences being what they may be. No phone, no eclipse glasses to mitigate the intensity of what lies before you. The only options are appetite, experience, memory, and later, if so inclined, writing it down.”

Suffering, Joy, and Incarnate Presence

“I have much to write you, but I do not want to do so with pen and ink. I hope to see you soon, and we will talk face to face.” With this, John closed the third New Testament epistle that bears his name. The letter is nearly 1,900 years old, yet the sentiment is entirely recognizable. In fact, many of us have likely expressed similar sentiments; only for us it was more likely an electronic medium that we preferred to forgo in favor of face-to-face communication. There are things better said in person; and, clearly, this is not an insight stumbled upon by digital-weary interlocutors of the 21st century.

Yet, John did pen his letter. There were things the medium would not convey well, but he said all that could be said with pen and ink. He recognized the limits of the medium and used it accordingly, but he did not disparage the medium for its limits. Pen and ink were no less authentic, no less real, nor were they deemed unnatural. They were simply inadequate given whatever it was that John wanted to communicate. For that, the fullness of embodied presence was deemed necessary. It was, I think, a practical application of a theological conviction which John had elsewhere memorably articulated.

In the first chapter of his Gospel, John wrote, “The Word became flesh and made his dwelling among us.” It is a succinct statement of the doctrine of the incarnation, what Christians around the world celebrate at Christmas time. The work of God required the embodiment of divine presence. Words were not enough, and so the Word became flesh. He wept with those who mourned, he took the hand of those no others would touch, he broke bread and ate with outcasts, and he suffered. All of this required the fullness of embodied presence. John understood this, and it became a salient feature of his theology.

For my part, these thoughts have been passing in and out of mind inchoately and inarticulately since the Newtown shooting, and specifically as I thought about the responses to the shooting throughout our media environment. I was troubled by the urge to post some reaction to the shooting, but, initially, I don’t think I fully understood what troubled me. At first, it was the sense that I should say *something*, but I’ve come to believe that it was rather that *I* should say something.

Thinking about it as a matter of *I* saying something struck me as unjustifiably self-indulgent. I still believe this to be part of the larger picture, but there was more. Thinking about it as a matter of I *saying* something pointed to the limitations of the media through which we have been accustomed to interacting with the world. As large as images loom on digital media, the word is still prominent. For the most part, if we are to interact with the world through digital media, we must use our words.

We know, however, that our words often fail us and prove inadequate in the face of the most profound human experiences, whether tragic, ecstatic, or sublime. And yet it is in those moments, perhaps especially in those moments, that we feel the need to exist (for lack of a better word), either to comfort or to share or to participate. But the medium best suited for doing so is the body, and it is the body that is, of necessity, abstracted from so much of our digital interaction with the world. With our bodies we may communicate without speaking. It is a communication by being and perhaps also doing, rather than by speaking.

Of course, embodied presence may seem, by comparison to its more disembodied counterparts, both less effectual and more fraught with risk. Embodied presence enjoys none of the amplification that technologies of communication afford. It cannot, after all, reach beyond the immediate place and time. And it is vulnerable presence. Embodied presence involves us with others, often in unmanageable, messy ways that are uncomfortable and awkward. But that awkwardness is also a measure of the power latent in embodied presence.

Embodied presence also liberates us from the need to prematurely reach for rational explanation and solutions — for an answer. If I can only speak, then the use of words will require me to search for sense. Silence can contemplate the mysterious, the absurd, and the act of grace, but words must search for reasons and fixes. This is, in its proper time, not an entirely futile endeavor; but its time is usually not in the aftermath. In the aftermath of the tragic, when silence and “being with” and touch may be the only appropriate responses, then only embodied presence will do. Its consolations are irreducible. This, I think, is part of the meaning of the Incarnation: the embrace of the fullness of our humanity.

Words and the media that convey them, of course, have their place, and they are necessary and sometimes good and beautiful besides. But words are often incomplete, insufficient. We cannot content ourselves with being the “disincarnate users” of electronic media that McLuhan worried about, nor can we allow the assumptions and priorities of disincarnate media to constrain our understanding of what it means to be human in this world.

At the close of the second epistle that bears his name, John also wrote, “I have much to write to you, but I do not want to use paper and ink.” But in this case, he added one further clause. “Instead,” he continued, “I hope to visit you and talk with you face to face, so that our joy may be complete.” Joy completed. Whatever it might mean for our joy to be completed, it is a function of embodied presence with all of its attendant risks and limitations.

May your joy be complete.

Violence and Technology

There is this well known line from Wittgenstein’s Tractatus that reads, “Whereof one cannot speak, thereof one must be silent.” There is much wisdom in this, especially when one extends its meaning beyond what Wittgenstein intended (so far as I understand what he intended). We all know very well that words often fail us when we are confronted with unbearable sorrow or unmitigated joy. In the aftermath of the horror in Newtown, Connecticut, then, what could one say? Everything else seemed trivial.

I first heard of the shooting when I logged on to Twitter to post some frivolous comment, and, of course, I did not follow through. However, I then felt the need to post something — something appropriate, something with sufficient gravitas. But I asked myself why? Why should I feel the need to post anything? To what end? So that others may note that I responded to the tragedy with just the right measure of grace and seriousness? Or to self-righteously admonish others, implicitly of course, about their own failure to respond as I deemed appropriate?

When we become accustomed to living and thinking in public, the value of unseen action and unshared thoughts is eclipsed. “I should be silent,” a part of us may acknowledge, but then in response, a less circumspect voice within us wonders, “But how will anyone know that I am being silent? A hashtag perhaps, #silent?”

I felt just then, with particular force, the stunning degree of self-indulgence invited by social media. But then, of course, I had to reckon with the fact that the well of self-indulgence tapped by social media springs from no other source but myself.

There is only one other point that I want to consider. Within my online circles, many have sought to challenge the slogan “Guns don’t kill people,” and they have done so based on premises which I am generally inclined to support. I have myself associated the technological neutrality position with this slogan, and I have found it an inadequate position. Guns, like other technologies, yield a causal force independent of the particular uses to which they are put. They enter actively and with consequence into our perception and experience of the world. This, I continue to believe, is quite true.

Several months ago, in the wake of another tragic shooting, Evan Selinger wrote a well-considered piece on this very theme and I encourage you to read it: “The Philosophy of the Technology of the Gun.”

Less effectively, in my view, but thoughtfully still, PJ Rey revisited Zeynep Tufekci’s appropriation of Aristotle’s categories of causality to frame the gun as the material cause of acts of violence. The argument here is also against technological neutrality; I’m just not entirely sure that Aristotle’s categories are fully understood by Rey or Tufekci (which is not to say that I fully understand them). The material cause is not “that without which,” but “that out of which.” But then again, I put Wittgenstein’s dictum to my own uses; I suppose Aristotle too can be used suggestively, if not rigorously. Maybe.

Thus far, I’ve been sympathetic to the claims advanced, but there is latent in these considerations (but not necessarily in the thinking of these authors) an opposite error that I’ve also seen expressed explicitly and forcefully. Last night, I caught the following comment on Twitter from Prof. Lance Strate. Strate is a respected media ecologist and I have in the past appreciated his insights and commentary. I was, however, stopped short by this tweet:

I want to make all the requisite acknowledgements here. It is a tweet, after all, and the medium is not conducive to nuance. Nor is one required to say everything one thinks about a matter whenever one speaks of that matter. And, in fairness to Strate, I also want to provide a link to his fuller discussion of the situation on his blog, “On Guns and More,” much of which I would agree with.

That said, “Surely the blame is also on him,” was my initial response to this tweet. Again, I want to read generously, particularly in a medium that is given to misunderstanding. I don’t know that Strate meant to absolve the shooter of all responsibility; in fact, I have to believe such was not the case. But this comment reminded me that in our efforts to critique the neutrality of technology position, we need to take care lest we end up endorsing, in my view, more pernicious errors of judgment.

Thinking again about the manner in which a gun enters into our phenomenological experience, it is true to say that a gun wants to be shot. But this does not say everything there is to say; it doesn’t even say the most important and relevant things that could be said. Why is it, at times, not shot at all? Further, to say it wants to be shot is not yet to say what it will be shot at or why. We cannot dismiss the other forms of causality that come into play. If Aristotle is to be invoked, after all, it should be acknowledged that he privileged final causation whenever possible.

Interestingly, in his illustration of the four causes – the making of a bronze statue – Aristotle did not take the craftsman to be the best example of an efficient cause. It was instead the knowledge the craftsman possessed that best illustrated the efficient cause. If we apply this analogously to the present case, it suggests that knowledge of how to inflict violence is the efficient cause. And this reminds us, disturbingly, of what is latent in all of us.

It reminds me as well of some other well known lines, not from Wittgenstein this time, but from Solzhenitsyn: “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being. And who is willing to destroy a piece of his own heart?”

Kyrie Eleison.

The Conscience of a Machine

Recently, Gary Marcus predicted that within the next two to three decades we would enter an era “in which it will no longer be optional for machines to have ethical systems.” Marcus invites us to imagine the following driverless car scenario: “Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk?”

In this scenario, a variation of the trolley problem, the computer operating the car would need to make a decision (although I suspect putting it that way is an anthropomorphism). Were a human being called upon to make such a decision, it would be considered a choice of moral consequence. Consequently, writing about Marcus’ piece, Nicholas Carr concluded, “We don’t even really know what a conscience is, but somebody’s going to have to program one nonetheless.”

Of course, there is a sense in which autonomous machines of this sort are not really ethical agents. To speak of their needing a conscience strikes me as a metaphorical use of language. The operation of their “conscience” or “ethical system” will not really resemble what has counted as moral reasoning or even moral intuition among human beings. They will do as they are programmed to do. The question is, What will they be programmed to do in such circumstances? What ethical system will animate the programming decisions? Will driverless cars be Kantians, obeying one rule invariably; or will they be Benthamites, calculating the greatest good for the greatest number?

There is an interesting sense, though, in which an autonomous machine of the sort envisioned in these scenarios is an agent, even if we might hesitate to call it an ethical agent. What’s interesting is not that a machine may cause harm or even death. We’ve been accustomed to this for generations. But in such cases, a machine has ordinarily malfunctioned, or else some human action was at fault. In the scenarios proposed by Marcus, an action that causes harm would be the result of a properly functioning machine, and it would not have been the result of direct human action. The machine will have decided to take an action that resulted in harm, even if it was in some sense the lesser harm. In fact, such machines might rightly be called the first truly malfunctioning machines.

There is little chance that our world will not one day be widely populated by autonomous machines of the sort that will require a “conscience” or “ethical systems.” Determining what moral calculus should inform such “moral machines” is problematic enough. But there is another, more subtle danger that should concern us.

Such machines seem to enter into the world of morally consequential action that until now has been occupied exclusively by human beings, but they do so without a capacity to be burdened by the weight of the tragic, to be troubled by guilt, or to be held to account in any sort of meaningful and satisfying way. They will, in other words, lose no sleep over their decisions, whatever those may be.

We have an unfortunate tendency to adapt, under the spell of metaphor, our understanding of human experience to the characteristics of our machines. Take memory for example. Having first decided, by analogy, to call a computer’s capacity to store information “memory,” we then reversed the direction of the metaphor and came to understand human memory by analogy to computer “memory,” i.e., as mere storage. So now we casually talk of offloading the work of memory or of Google being a better substitute for human memory without any thought for how human memory is related to perception, understanding, creativity, identity, and more.

I can too easily imagine a similar scenario wherein we get into the habit of calling the algorithms by which machines are programmed to make ethically significant decisions the machine’s “conscience,” and then turn around, reverse the direction of the metaphor, and come to understand human conscience by analogy to what the machine does. This would result in an impoverishment of the moral life.

Will we then begin to think of the tragic sense, guilt, pity, and the necessity of wrestling with moral decisions as bugs in our “ethical systems”? Will we envy the literally ruth-less efficiency of “moral machines”? Will we prefer the Huxleyan comfort of a diminished moral sense, or will we claim the right to be unhappy, to be troubled by fully realized human conscience?

This is, of course, not merely a matter of making the “right” decisions. Part of what makes programming “ethical systems” troublesome is precisely our inability to arrive at a consensus about what is the right decision in such cases. But even if we could arrive at some sort of consensus, the risk I’m envisioning would remain. The moral weightiness of human existence does not reside solely in the moment of decision; it extends beyond the moment to a life burdened by the consequences of that action. It is precisely this “living with” our decisions that a machine conscience cannot know.

In his Tragic Sense of Life, Miguel de Unamuno relates the following anecdote: “A pedant who beheld Solon weeping for the death of a son said to him, ‘Why do you weep thus, if weeping avails nothing?’ And the sage answered him, ‘Precisely for that reason–because it does not avail.'”

Were we to conform our conscience to the “conscience” of our future machines, we would cease to shed such tears, and our humanity lies in Solon’s tears.

_______________________________________________

Also consider Evan Selinger’s excellent and relevant piece, “Would Outsourcing Morality to Technology Diminish Our Humanity?”