Are Human Enhancement and AI Incompatible?

A few days ago, in a post featuring a series of links to stories about new and emerging technologies, I included a link to a review of Nick Bostrom’s new book, Superintelligence: Paths, Dangers, Strategies. Not long afterwards, I came across an essay adapted from Bostrom’s book on Slate’s “Future Tense” blog. The excerpt is given the cheerfully straightforward title, “You Should Be Terrified of Super Intelligent Machines.”

I’m not sure that Bostrom himself would put it quite like that. I’ve long thought of Bostrom as one of the more enthusiastic proponents of a posthumanist vision of the future. Admittedly, I’ve not read a great deal of his work (including this latest book). I first came across Bostrom’s name in Cary Wolfe’s What Is Posthumanism?, which led me to Bostrom’s article, “A History of Transhumanist Thought.”

For his part, Wolfe sought to articulate a more persistently posthumanist vision for posthumanism, one which dispensed with humanist assumptions about human nature altogether. In Wolfe’s view, Bostrom was guilty of building his transhumanist vision on a thoroughly humanist understanding of the human being. The humanism in view here, it’s worth clarifying, is that which we ordinarily associate with the Renaissance or the Enlightenment, one which highlights autonomous individuality, agency, and rationality. It is also one which assumes a Platonic or Cartesian mind/body dualism. Wolfe, like N. Katherine Hayles before him, finds this to be misguided and misleading, but I digress.

Whether Bostrom would’ve chosen such an alarmist title or not, his piece does urge us to lay aside the facile assumption that super-intelligent machines will be super-intelligent in a predictably human way. This is an anthropomorphizing fallacy. Consequently, we should consider the possibility that super-intelligent machines will pursue goals that may, as an unintended side-effect, lead to human extinction. I suspect that in the later parts of his book, Bostrom might have a few suggestions about how we might escape such a fate. I also suspect that none of these suggestions include the prospect of halting or limiting the work being done to create super-intelligent machines. In fact, judging from the chapter titles and sub-titles, it seems that the answer Bostrom advocates involves figuring out how to instill appropriate values in super-intelligent machines. This brings us back to the line of criticism articulated by Wolfe and Hayles: the traditionally humanist project of rational control and mastery is still the underlying reality.

It does seem reasonable for Bostrom, who is quite enthusiastic about the possibilities of human enhancement, to be a bit wary about the creation of super-intelligent machines. It would be unfortunate indeed if, having finally figured out how to download our consciousness or perfect a cyborg platform for it, a clever machine of our making later came around, pursuing some utterly trivial goal, and decided, without a hint of malice, that it needed to eradicate these post-human humans as a step toward the fulfillment of its task. Unfortunate, and nihilistically comic.

It is interesting to consider that these two goals we rather blithely pursue–human enhancement and artificial intelligence–may ultimately be incompatible. Of course, that is a speculative consideration, and, to some degree, so is the prospect of ever achieving either of those two goals, at least as their most ardent proponents envision their fulfillment. But let us consider it for just a moment anyway for what it might tell us about some contemporary versions of the posthumanist hope.

Years ago, C.S. Lewis famously warned that the human pursuit of mastery over Nature would eventually amount to the human pursuit of mastery over Humanity, and what this would really mean is the mastery of some humans over others. This argument is all the more compelling now, some 70 or so years after Lewis made it in The Abolition of Man. It would seem, though, that an updated version of that argument would need to include the further possibility that the tools we develop to gain mastery over nature and then humanity might finally destroy us, whatever form the “us” at that unforeseeable juncture happens to take. Perhaps this is the tacit anxiety animating Bostrom’s new work.

And this brings us back, once again, to the kind of humanism at the heart of posthumanism. The posthumanist vision that banks on some sort of eternal consciousness–the same posthumanist vision that leads Ray Kurzweil to take 150 vitamins a day–that posthumanist vision is still the vision of someone who intends to live forever in some clearly self-identifiable form. It is, in this respect, a thoroughly Western religious project insofar as it envisions and longs for the immortality of the individuated self. We might even go so far as to call it, in an obviously provocative move, a Christian heresy.

Finally, our potentially incompatible technical aspirations reveal something of the irrationality, or a-rationality if you prefer, at the heart of our most rational project. Technology and technical systems assume rationality in their construction and their operation. Thinking about their potential risks and trying to prevent and mitigate them is also a supremely rational undertaking. But at the heart of all of this rational work there is a colossal unspoken absence: there is a black hole of knowledge that, beginning with the simple fact of our inability to foresee the full ramifications of anything that we do or make, subsequently sucks into its darkness our ability to expertly anticipate and plan and manage with anything like the confident certainty we project.

It is one thing to live with this relative risk and uncertainty when we are talking about simple tools and machines (hammers, bicycles, etc.). It is another thing when we are talking about complex technical systems (automotive transportation, power grids, etc.). It is altogether something else when we are talking about technical systems that may fundamentally alter our humanity or else eventuate in its annihilation. The fact that we don’t even know how seriously to take these potential threats, that we cannot comfortably distinguish between what is still science fiction and what will, in fact, materialize in our lifetimes, that’s a symptom of the problem, too.

I keep coming back to the realization that our thinking about technology is often inadequate or ineffectual because it is starting from the wrong place; or, to put it another way, it is already proceeding from assumptions grounded in the dynamics of technology and technical systems, so it bends back toward the technological solution. If we already tacitly value efficiency, for example, if efficiency is already an assumed good that no longer needs to be argued for, then we will tend to pursue it by whatever possible means under all possible circumstances. Whenever new technologies appear, we will judge them in light of this governing preference for efficiency. If the new technology affords us a more efficient way of doing something, we will tend to embrace it.

But the question remains, why is efficiency a value that is so pervasively taken for granted? If the answer seems commonsensical, then, I’d humbly suggest that we need to examine it all the more critically. Perhaps we will find that we value efficiency because this virtue native to the working of technical and instrumental systems has spilled over into what had previously been non-technical and non-instrumental realms of human experience. Our thinking is thus already shaped (to put it in the most neutral way possible) by the very technical systems we are trying to think about.

This is but one example of the dynamic. Our ability to think clearly about technology will depend in large measure on our ability to extricate our thinking from the criteria and logic native to technological systems. This is, I fully realize, a difficult task. I would never claim that I’ve achieved this clarity of thought myself, but I do believe that our thinking about technology depends on it.

There’s a lot more to be said, but I’ll leave it there for now. Your thoughts, as always, are welcome.

A Few Items for Your Consideration

Here are a few glimpses of the future ranging from the near and plausible, to the distant and uncertain. In another world–one, I suppose, in which I get paid to write these posts–I’d write more about each. In this world, I simply pass them along for your consideration.

Google Glass App Reads Your Emotions

“A new Glassware App for Google Glass will uncover a person’s emotion, age range and gender just by facial recognition technology ….

Facial recognition has always been seen with nervousness, as people tend to prefer privacy over the ability to see a stranger’s age or gender. But these two apps prove sometimes letting a robot know you’re sad can help for a better relationship between fellow humans. Letting the robot lead has proven to increase human productivity and better the ebb and flow of a work space, a partnership, any situation dealing with human communication.

The SHORE app is currently not available for download, but you can try US+ now. May the robots guide us to a more humane future.”

GM Cars to Monitor Drivers

“General Motors, the largest US auto manufacturer by sales, is preparing to launch the world’s first mass-produced cars with eye- and head-tracking technology that can tell whether drivers are distracted, according to people with knowledge of the plans ….

The company is investing in technology that will be able to tell how hard a driver is thinking by monitoring the dilation of the pupils, and combines facial information with sensors for vital signs such as blood alcohol levels and heart rate.”

Electrical Brain Stimulation

“Transcranial direct current stimulation (TDCS), which passes small electrical currents directly on to the scalp, stimulates the nerve cells in the brain (neurons). It’s non-invasive, extremely mild and the US military even uses TDCS in an attempt to improve the performance of its drone pilots.

The idea is that it makes the neurons more likely to fire and preliminary research suggests electrical stimulation can improve attention as well as have a positive impact on people with cognitive impairments and depression ….

And more worryingly for him, people are also increasingly making brain stimulation kits themselves. This easily ‘puts the technology in the realms of clever teenagers,’ adds Dr Davis.

An active forum on reddit is devoted to the technology, and people there have complained of ‘burning to the scalp’. Another user wrote that they ‘seemed to be getting angry frequently’ after using TDCS.”

Preparing for Superintelligent AI

“Bostrom takes a cautious view of the timing but believes that, once made, human-level AI is likely to lead to a far higher level of ‘superintelligence’ faster than most experts expect – and that its impact is likely either to be very good or very bad for humanity.

The book enters more original territory when discussing the emergence of superintelligence. The sci-fi scenario of intelligent machines taking over the world could become a reality very soon after their powers surpass the human brain, Bostrom argues. Machines could improve their own capabilities far faster than human computer scientists.”

We’ve got some thinking to do, folks, careful, patient thinking. Happily, we don’t have to do that thinking alone and in isolation. Here is Evan Selinger helping us think clearly about our digital tools with his usual, thoughtful analysis: “Why Your Devices Shouldn’t Do the Work of Being You.”

Here, too, is a critical appraisal of the religiously intoned hopes of the cult of the Singularity.

Finally, Nick Carr invites us to cautiously consider the potential long-term consequences of the recently unveiled Apple Watch since “never before have we had a tool that promises to be so intimate a companion and so diligent a monitor as the Apple Watch.”

Waiting for Socrates … So We Can Kill Him Again and Post the Video on YouTube

It will come as no surprise, I’m sure, if I tell you that the wells of online discourse are poisoned. It will come as no surprise because critics have complained about the tone of online discourse for as long as people have interacted with one another online. In fact, we more or less take the toxic, volatile nature of online discourse for granted. “Don’t read the comments” is about as routine a piece of advice as “look both ways before crossing the street.” And, of course, it is also true that complaints about the coarsening of public discourse in general have been around for a lot longer than the Internet and digital media.

That said, I’ve been intrigued, heartened actually, by a recent round of posts bemoaning the state of online rhetoric from some of the most thoughtful people whose work I follow. Here is Freddie deBoer lamenting the rhetoric of the left, and here is Matthew Anderson noting much of the same on the right. Here is Alan Jacobs on why he’s stepping away from Twitter. Follow any of those links and you’ll find another series of links to thoughtful, articulate writers all telling us, more or less, that they’ve had enough. This piece urges civility and it suggests, hopefully (naively?), that the “Internet” will learn soon enough to police itself, but the evidence it cites along the way seems rather to undermine such hopefulness. I won’t bother to point you to some of the worst of what I’ve regrettably encountered online in recent weeks.

Why is this the case? Why, as David Sessions recently put it, is the state of the Internet awful?

Like everyone else, I have scattered thoughts about this. For one thing, the nature of the medium seems to encourage rancor, incivility, misunderstanding, and worse. Anonymity has something to do with this, and so does the abstraction of the body from the context of communication.

Along the same media-ecological lines, Walter Ong noted that oral discourse tends to be agonistic and literate discourse tends to be irenic. Online discourse tends to be conducted in writing, which might seem to challenge Ong’s characterization. But just as television and radio constituted what Ong called secondary orality, so might we say that social media is a form of secondary literacy, blurring the distinctions between orality and literacy. It is text based, but, like oral discourse, it brings people into a context of relative communicative immediacy. That is to say that through social media people are responding to one another in public and in short order, more as they would in a face-to-face encounter, for example, than in private letters exchanged over the course of months.

In theory, writing affords us the temporal space to be more thoughtful and precise in expressing our ideas, but, in practice, the expectations of immediacy in digital contexts collapse that space. So we lose the strengths of each medium: we get none of the meaning-making cues of face-to-face communication and none of the time for reflection that written communication ordinarily grants. The media context, then, ends up rife with misunderstanding and agonistic in tone; it encourages performative pugilism.

Also, as the moral philosopher Alasdair MacIntyre pointed out some time ago, we no longer operate with a set of broadly shared assumptions about what is good and what shape a good life should take. Our ethical reasoning tends not to be built on the same foundation. Because we are reasoning from incompatible moral premises, the conclusions reached by two opposing parties tend to be interpreted as sheer stupidity or moral obtuseness. In other words, because our arguments, proceeding as they do from such disparate moral frameworks, fail to convince and persuade, we begin to assume that those who will not yield to our moral vision must thus be fools or worse. Moreover, we conclude, fools and miscreants cannot be argued with; they can only be shamed, shouted down, or otherwise silenced.

Digital dualism is also to blame. Some people seem to operate under the assumption that they are not really racists, misogynists, anti-Semites, etc.–they just play one on Twitter. It really is much too late in the game to play that tired card.

Perhaps, too, we’ve conflated truth and identity in such a way that we cannot conceive of a challenge to our views as anything other than a challenge to our humanity. Conversely, it seems that in some highly-charged contexts being wrong can cost you the basic respect one might be owed as a fellow human being.

Finally, the Internet is awful because, frankly, people are awful. We all are; at least we all can be under the right circumstances. As Solzhenitsyn put it, “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”

To that list, I want to offer just one more consideration: a little knowledge is a dangerous thing, and there are few things the Internet does better than giving everyone a little knowledge. A little knowledge is a dangerous thing because it is just enough to give us the illusion of mastery and a sense of authority. This illusion, fed by the myth of having all the world’s information at our fingertips, encourages us to believe that by skimming an article here or reading the summary of a book there we become experts who may now liberally pontificate about the most complex and divisive issues with unbounded moral and intellectual authority. This is the worst kind of insufferable foolishness, that which mistakes itself for wisdom without a hint of irony.

Real knowledge, on the other hand, is constantly aware of all that it does not know. The more you learn, the more you realize how much you don’t know, and the more hesitant you’ll be to speak as if you’ve got everything figured out. Getting past that threshold of “a little knowledge” tends to breed humility and create the conditions that make genuine dialogue possible. But that threshold will never be crossed if all we ever do is skim the surface of reality, and this seems to be the mode of engagement encouraged by the information ecosystem sustained by digital media.

We’re in need of another Socrates who will teach us once again that the way of wisdom starts with a deep awareness of our own ignorance. Of course, we’d kill him too, after a good skewering on Twitter, and probably without the dignity of hemlock. A posthumous skewering would follow, naturally, after the video of his death got passed around on Reddit and YouTube.

I don’t want to leave things on that cheery note, but the fact is that I don’t have a grand scheme for making online discourse civil, informed, and thoughtful. I’m pretty sure, though, that things will not simply work themselves out for the better without deliberate and sustained effort. Consider how W.H. Auden framed the difference between traditional cultures and modernity:

“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues which were once nursed unconsciously by the forces of nature must now be recovered and fostered by a deliberate effort of the will and the intelligence. In the future, societies will not grow of themselves. They will be either made consciously or decay.”

For better or worse, or more likely both, this is where we find ourselves–either we deploy deliberate effort of will and intelligence or face perpetual decay. Who knows, maybe the best we can do is to form and maintain enclaves of civility and thoughtfulness amid the rancor, communities of discourse where meaningful conversation can be cultivated. These would probably remain small communities, but their success would be no small thing.

__________________________________

Update: After publishing, I read Nick Carr’s post on the revival of blogs and the decline of Big Internet. “So, yeah, I’m down with this retro movement,” Carr writes, “Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.” I thought that was good news in light of my closing paragraph.

And, just in case you need more by way of diagnosis, there’s this: “A Second Look At The Giant Garbage Pile That Is Online Media, 2014.”

Our Little Apocalypses

An incoming link to my synopsis of Melvin Kranzberg’s Six Laws of Technology alerted me to a short post on Quartz about a new book by an author named Michael Harris. The book, The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, explores the tradeoffs induced by the advent of the Internet. Having not read the book, I obviously can’t say much about it, but I was intrigued by one angle Harris takes that comes across in the Quartz piece.

Harris’s book is focused on the generation, a fuzzy category to be sure, that came of age just before the Internet exploded onto the scene in the early 90s. Here’s Harris:

“If you were born before 1985, then you know what life is like both with the internet and without. You are making the pilgrimage from Before to After.”

“If we’re the last people in history to know life before the internet, we are also the only ones who will ever speak, as it were, both languages. We are the only fluent translators of Before and After.”

It would be interesting to read what Harris does with this framing. In any case, it’s something I’ve thought about often. This is my fifteenth year teaching. Over the years I’ve noticed, with each new class, how the world that I knew as a child and as a young adult recedes further and further into the murky past. As you might guess, digital technology has been one of the most telling indicators.

Except for a brief flirtation with Prodigy on an MS-DOS machine with a monochrome screen, the Internet did not come into my life until I was a freshman in college. I’m one of those people Harris is writing about, one of the Last Generation to know life before the Internet. Putting it that way threatens to steer us into a rather unseemly romanticism, and, knowing that I’m temperamentally drawn to dying lights, I want to make sure I don’t give way to it. That said, it does seem to me that those who’ve known the Before and After, as Harris puts it, are in a unique position to evaluate the changes. Experience, after all, is irreducible and incommunicable.

One of the recurring rhetorical tropes that I’ve listed as a Borg Complex symptom is that of noting that every new technology elicits criticism and evokes fear, that society always survives the so-called moral panic or techno-panic, and then concluding, QED, that those critiques and fears, including those presently being expressed, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. But this line of argument is compelling to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

To be very clear and to avoid the pedantic, sanctimonious reply–although, in all honesty, I’ve gotten so little of that on this blog that I’ve come to think that a magical filter of civility vets all those who come by–let me affirm that yes, of course, I certainly would’ve made many trade-offs along the way, too. To recognize costs and losses does not mean that you always refuse to incur them, it simply means that you might incur them in something other than a naive, triumphalist spirit.

Around this time last year, an excerpt from Jonathan Franzen’s then-forthcoming edited work on Karl Kraus was published in the Guardian; it was panned, frequently and forcefully, and deservedly so in some respects. But the conclusion of the essay struck me then as being on to something.

“Maybe … apocalypse is, paradoxically, always individual, always personal,” Franzen wrote,

“I have a brief tenure on earth, bracketed by infinities of nothingness, and during the first part of this tenure I form an attachment to a particular set of human values that are shaped inevitably by my social circumstances. If I’d been born in 1159, when the world was steadier, I might well have felt, at fifty-three, that the next generation would share my values and appreciate the same things I appreciated; no apocalypse pending.”

But, of course, he wasn’t. He was born in the modern world, like all of us, and this has meant change, unrelenting change. Here is where the Austrian writer Karl Kraus, whose life straddled the turn of the twentieth century, comes in: “Kraus was the first great instance of a writer fully experiencing how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for personal apocalypse.” Perhaps. I’m tempted to quibble with this claim. The words of John Donne, “Tis all in pieces, all coherence gone,” come to mind. Yet, even if Franzen is not quite right about the historical details, I think he’s given honest voice to a common experience of modernity:

“The experience of each succeeding generation is so different from that of the previous one that there will always be people to whom it seems that the key values have been lost and there can be no more posterity. As long as modernity lasts, all days will feel to someone like the last days of humanity. Kraus’s rage and his sense of doom and apocalypse may be the antithesis of the upbeat rhetoric of Progress, but like that rhetoric, they remain an unchanging modality of modernity.”

This is, perhaps, a bit melodramatic, and it is certainly not all that could be said on the matter, or all that should be said. But Franzen is telling us something about what it feels like to be alive these days. It’s true, Franzen is not the best public face for those who are marginalized and swept aside by the tides of technological change, tides which do not lift all boats, tides which may, in fact, sink a great many. But there are such people, and we do well to temper our enthusiasm long enough to enter, so far as it is possible, into their experience. In fact, precisely because we do not have a common culture to fall back on, we must work extraordinarily hard to understand one another.

Franzen is still working on the assumption that these little personal apocalypses are a generational phenomenon. I’d argue that he’s underestimated the situation. The rate of change may be such that the apocalypses are now intra-generational. It is not simply that my world is not my parents’ world; it is that my world now is not what my world was a decade ago. We are all exiles now, displaced from a world we cannot reach because it fades away just as its contours begin to materialize. This explains why, as I wrote earlier this year, nostalgia is not so much a desire for a place or a time as it is a desire for some lost version of ourselves. We are like Margaret, who, in Hopkins’ poem, laments the passing of the seasons, Margaret to whom the poet’s voice says kindly, “It is Margaret you mourn for.”

Although I do believe that certain kinds of change ought to be resisted–I’d be a fool not to–none of what I’ve been trying to get at in this post is about resisting change in itself. Rather, I think all I’ve been trying to say is this: we must learn to take account of how differently we experience the changing world so that we might best help one another as we live through the change that must come. That is all.

Preserving the Person in the Emerging Kingdom of Technological Force

What does Iceland look like through Google Glass? Turns out it looks kind of like Iceland. Consider this stunning set of photographs showcasing a tool built by Silica Labs that allows users to post images from Glass directly onto their WordPress blog. If you click over to see the images, you’ll notice two things. First, you’ll see that Iceland is beautiful, something you may already have known. Second, you’ll see that pictures taken with Glass look, well, just like pictures not taken with Glass.

There’s one exception to that second observation. When the user’s hands appear in the frame, the POV perspective becomes evident. Apart from that, these great pictures look just like every other set of great pictures. This isn’t a knock on the tool developed by Silica Labs, by the way. I’m not really interested in that particular app. I’m interested in the appeal of Glass and how users understand their experience with Glass, and these pictures, not markedly different from what you could produce without Glass, suggested a thesis: perhaps the appeal of Glass has less to do with what it enables you to do than it does with the way you feel when you’re doing it. And, as it turns out, there is a recurring theme in how many early adopters described their experience of Glass that seems to support this thesis.

As Glass started making its first public appearances, reviewers focused on user experience; and their criticism typically centered on the look of Glass, which was consistently described as geeky, nerdy, pretentious, or silly. Clearly, Glass had an image problem. But soon the conversation turned to the experience of those in the vicinity of a Glass user. Mark Hurst was one of the first to redirect our attention in this direction: “The most important Google Glass experience is not the user experience – it’s the experience of everyone else.” Hurst was especially troubled by the ease with which Glass can document others and the effects this would have on the conduct of public life.

Google was sensitive to these concerns, and it quickly assured the public that the power of Glass to record others surreptitiously had been greatly exaggerated. A light would indicate when Glass was activated so others would know if they were being recorded and the command to record would be audible. Of course, it didn’t take long to circumvent these efforts to mitigate Glass’s creep factor. Without much regard for Google’s directives, hackers created apps that allowed users to take pictures merely by winking. Worse yet, an app that equipped Glass with face-recognition capabilities soon followed.

Writing after the deployment of these hacks, David Pogue echoed Hurst’s earlier concerns: “the biggest obstacle [facing Glass] is the smugness of people who wear Glass—and the deep discomfort of everyone who doesn’t.” After laying out his tech-geek bona fides, even Nick Bilton confessed his unease around people wearing Glass: “I felt like a mere mortal among an entirely different class of super-connected humans.” The defining pushback against this feeling Glass engenders in others came from Adrian Chen, who proclaimed unequivocally, “By donning Google Glass, you, the Google Glass user, are volunteering to be a foot soldier in Google’s asshole army.”

Hurst was on to something. He was right to direct attention to the experience of those in the vicinity of a Glass user (or Glasshole, as such users have been affectionately called by some). But it’s worth pivoting back to the experience of the Glass user. Set aside ergonomics, graphic interfaces, and design questions for a moment, and consider what users report feeling when they use Google Glass.

Let’s start with Evernote CEO Phil Libin. In a Huffington Post interview late in 2012, he claimed that “in as little as three years” it would seem “barbaric” not to use Google Glass. That certainly has a consciously hyperbolic ring to it, but it’s the follow-up comment that’s telling: “People think it looks kind of dorky right now but the experience is so powerful that you feel stupid as soon as you take the glasses off…”

“The experience is so powerful” – there it is. Glass lets you check the Internet, visualize information in some interesting ways, send messages, take pictures, and shoot video. I’m sure I’m missing something, but none of those are in themselves groundbreaking or revolutionary. Clearly, though, there’s something about having all of this represented for the user as part of their perceptual apparatus that conveys a peculiar sense of empowerment.

Libin was not the only one to report this feeling of power. Robert Scoble declared, “I will never live another day without wearing Google Glass or something like it. They have instantly become part of my life.” “The human body has a lot of limitations,” software developer Monica Wilkinson explained, “I see [Glass] as a way to enhance our bodies.” Writing about his Glass experience on The Verge, Joshua Topolsky was emphatic: “I won’t lie, it’s amazingly powerful (and more than a little scary) to be able to just start recording video or snapping pictures with a couple of flicks of your finger or simple voice commands.” A little further on he added, “In the city, Glass make you feel more powerful, better equipped, and definitely less diverted.” Then there’s Chris Barrett, who captured the first arrest on Glass. Barrett witnessed a fight and came in close to film the action. He acknowledged that if he were not wearing Glass, he would not have approached the scene of the scuffle. Finally, there’s all that is implicit in the way Sergey Brin characterized the smartphone as he was introducing Glass: “It’s kind of emasculating.” Glass, we are to infer, addresses this emasculation by giving the user a sense of power. Pogue put it most succinctly: Glass puts its wearers in “a position of control.”

It is possible to make too much of these statements. Others have found that using Glass makes them feel self-conscious in public and awkward in interactions with others. But Glass has revealed to a few intrepid souls something of its potential power, and, if they’re to be trusted, the feeling has been intoxicating. Why is this?

Perhaps this is because the prosthetic effect is especially seamless, so that it feels as if you yourself are doing the things Glass enables rather than using a tool to accomplish them. When a tool works really well, it doesn’t feel like you’re using a tool; it feels like you are acting through the tool. Glass seems to take this a step further. You are not just acting through Glass; you are simply acting. You, by your gestures or voice commands, are doing these things. Even the way audio is received from Glass contributes to the effect. Here’s how Gary Shteyngart described the way it feels to hear using Glass’s bone transducer: “The result is eerie, as if someone is whispering directly into a hole bored into your cranium, but also deeply futuristic.” That sounds to me as if you are hearing audio in the way that we might imagine “hearing” telepathy.

In other words, there is an alluring immediacy to the experience of interacting with the world through Google Glass. This seamlessness, this way that Glass has of feeling like a profound empowerment, recalls nothing so much as the link between magic and technology so aptly captured in Arthur C. Clarke’s famous third law: “Any sufficiently advanced technology is indistinguishable from magic.” Clarke’s pithy law recalls a fundamental, and historical, connection between magic and technology: they are both about power. As Lewis Mumford put it in Technics and Civilization, “magic was the bridge that united fantasy with technology: the dream of power with the engines of fulfillment.” Or consider how C. S. Lewis formulated the relationship: “For magic and applied science alike the problem is how to subdue reality to the wishes of men: the solution is a technique.” Sociologist Richard Stivers has concluded, “Without magic, technology would have no fatal sway over us.”

So it turns out that the appeal of Glass, for all of its futuristic cyborg pretensions, may be anchored in an ancient desire that has long animated the technological project: the desire for the experience of power. And, privacy concerns aside, this may be the best reason to be wary of the device. Those who crave the feel of power—or who having tasted it, become too enamored of it—tend not to be the sort of people with whom you want to share a society.

It is also worth noting what we might call a pervasive cultural preparation for the coming of Glass. In another context, I’ve claimed that the closest analogy to the experience of the world through Google Glass may be the experience of playing a first-person video game. To a generation that has grown up playing first-person shooters and role-playing video games, Glass promises to make the experience of everyday life feel more like the experience of playing a game. In a comment on my initial observations, Nick Carr added, “You might argue that this reversal is already well under way in warfare. Video war games originally sought to replicate the look and feel of actual warfare, but now, as more warfare becomes automated via drones, robots, etc., the military is borrowing its interface technologies from the gaming world. War is becoming more gamelike.”

If you can’t quite get past the notion that Google Glass is nothing more than a white-tech-boy fantasy, consider that this is Glass 1.0. Wearable tech is barely out of the toddler stage. Project this technology just a little further down the line–when it is less obtrusive, more seamless in its operation–and it may appear instead that Libin, Scoble, and Topolsky have seen the future clearly, and it works addictively. Consider as well how some future version of Glass may combine with Leap Motion-style technology to fully deploy the technology-as-magic aesthetic, or the potential of Glass to interact with the much-touted Internet of Things. Wave your hand, speak your command, and things happen; the world obeys.

But letting this stand as a critique of Glass risks missing a deeper point. Technology and power are inseparable. Not all technologies empower in the same way, but all technologies empower in some way. And we should be particularly careful about technologies that grant power in social contexts. Power tends to objectify, and we could do without further inducements to render others as objects in our field of action.

In her wise and moving essay on the Iliad, Simone Weil characterized power’s manifestation in human affairs, what she calls force, as “the ability to turn a human being into a thing while he is still alive.” Power or force, then, is the ability to objectify. Deadly force, Weil observes, literally turns a person into a thing, a corpse. All less lethal deployments of force are derivative of this ultimate power to render a person a thing.

It is telling that the most vocal, and sometimes violent, opposition to Glass has come in response to its ability to document others, possibly without their awareness, much less consent. To be documented in such a way is to be objectified, and the intuitive discomfort others have felt in the presence of those wearing Glass is a reflection of an innate resistance to the force that would render us an object. In his excellent write-up of Glass late last year, Clive Thompson noted that while from his perspective he was wearing a computer that granted quick, easy access to information, “To everyone else, I was just a guy with a camera on his head.” “Cameras are everywhere in public,” Thompson observes, “but one fixed to your face sends a more menacing signal: I have the power to record you at a moment’s notice, it seems to declare — and maybe I already am.”

Later on in her reflections on the Iliad, Weil observed, “The man who is the possessor of force seems to walk through a non-resistant element; in the human substance that surrounds him nothing has the power to interpose, between the impulse and the act, the tiny interval that is reflection.” Curiously, Google researcher and wearable-computing pioneer, Thad Starner, has written, “Wearables empower the user by reducing the time between their intention to do a task and their ability to perform that task.”

Starner, I’m certain, has only the best of intentions. In the same piece he writes compellingly about the potential of Glass to empower individuals who suffer from a variety of physical impairments. But I also believe that he may have spoken more than he knew. The collapse of the space between intention or desire on the one hand and action or realization on the other may be the most basic reality constituting the promise and allure of technology. We should be mindful, though, of all that such a collapse may entail. Following Weil, we might consider, at least, that the space between impulse and act is also the space for reflection, and, further, the space in which we might appear to one another as fully human persons rather than objects to be manipulated or layers of data to be mined.