The Facebook Experiment, Briefly Noted

More than likely, you’ve recently heard about Facebook’s experiment in the manipulation of user emotions. I know, Facebook as a whole is an experiment in the manipulation of user emotions. Fair enough, but this was a more pointed experiment that involved the manipulation of what users see in their News Feeds. Here is how the article in the Proceedings of the National Academy of Sciences summarized the significance of the findings:

“We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”

Needless to say (well except to Facebook and the researchers involved), this massive psychological experiment raises all sorts of ethical questions and concerns. Here are some of the more helpful pieces I’ve found on the experiment and its implications:

Update: Here are two more worth considering:

Those six pieces should give you a good sense of the variety of issues involved in the whole situation, along with a host of links to other relevant material.

I don’t have too much to add except two quick observations. First, I was reminded, especially by Gurstein’s post, of Greg Ulmer’s characterization of the Internet as a “prosthesis of the unconscious.” Ulmer means something like this: The Internet has become a repository of the countless ways that culture has imprinted itself upon us and shaped our identity. Prior to the advent of the Internet, most of those memories and experiences would be lost to us even while they may have continued to be a part of who we became. The Internet, however, allows us to access many, if not all, of these cultural traces, bringing them to our conscious awareness and allowing us to think about them.

What Facebook’s experiment suggests rather strikingly is that such a prosthesis is, as we should have known, a two-way interface. It not only facilitates our extension into the world; it is also a means by which the world can take hold of us. As a prosthesis of our unconscious, the Internet is not only an extension of our unconscious, it also permits the manipulation of the unconscious by external forces.

Secondly, I was reminded of Paul Virilio’s idea of the general or integral accident. Virilio has written extensively about technology, speed, and accidents. The accident is an expansive concept in his work. In his view, accidents are built into the nature of any new technology. As he has frequently put it, “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash … Every technology carries its own negativity, which is invented at the same time as technical progress.”

The general or integral accident is made possible by complex technological systems. The accident made possible by a nuclear reactor or air traffic is obviously of a greater scale than that made possible by the invention of the hammer. Complex technological systems create the possibility of cascading accidents of immense scale and destructiveness. Information technologies also introduce the possibility of integral accidents. Virilio’s most common examples of these information accidents include the flash crashes on stock exchanges induced by electronic trading.

All of this is to say that Facebook’s experiment gives us a glimpse of what shape the social media accident might take. An interviewer alludes to this line from Virilio: “the synchronizing of collective emotions that leads to the administration of fear.” I’ve not been able to track down the original context, but it struck me as suggestive in light of this experiment.

Oh, and lastly, Facebook COO Sheryl Sandberg issued an apology of sorts. I’ll let Nick Carr tell you about it.

Links: Jacques Ellul, Luddites, and More

A follow-up to my last post is still forthcoming. In the meantime, here are a few links for your reading pleasure.

At The New Atlantis, Joshua Schulz writes about “Machine Grading and Moral Learning.” The essay is a critique of machine-based grading of student essays, but it ranges widely and deeply in its argument. Here’s an excerpt:

“The functional- and efficiency-centric view of technology, and the moral objections to it, have been around for a long time. Look to the tale of John Henry, the steel-driving man of American folklore who raced a tunnel-boring steam engine in a contest of efficiency, beating the machine but dying in the attempt. The moral of the tale is not, of course, that we will always be able to beat our machines in a fair contest. Rather, the contest is a tragic one, highlighting a cultural hamartia, namely, the belief that competing with the steam engine on its own terms is anything other than degrading.”

A post at Librarian Shipwreck asks, “Who’s Afraid of General Ludd?” You may remember that Borg Complex symptoms include, “Uses the term Luddite a-historically and as a casual slur.” That observation is echoed here:

“Whenever the term ‘Luddite’ appears as an insult it acts less as a reflection of the motives of those being slurred and more as a reflection of the fears of the person delivering the insult. But far from undermining Luddism, all that these insults do is underscore the tremendous power that a critique of technology couched in ‘commonality’ can still command.”

Read the whole thing for a historically grounded look at the Luddites and their motives.

Relatedly, here is a video and transcript of an interview with the late Jacques Ellul posted at Second Nature Journal. His insights still resonate. Here are two from the interview:

“Technology also obliges us to live more and more quickly. Inner reflection is replaced by reflex. Reflection means that, after I have undergone an experience, I think about that experience. In the case of a reflex you know immediately what you must do in a certain situation. Without thinking. Technology requires us no longer to think about the things. If you are driving a car at 150 kilometers an hour and you think you’ll have an accident. Everything depends on reflexes. The only thing technology requires us is: Don’t think about it. Use your reflexes.

Technology will not tolerate any judgment being passed on it. Or rather: technologists do not easily tolerate people expressing an ethical or moral judgment on what they do. But the expression of ethical, moral and spiritual judgments is actually the highest freedom of mankind. So I am robbed of my highest freedom. So whatever I say about technology and the technologists themselves is of no importance to them. It won’t deter them from what they are doing. They are now set in their course. They are so conditioned.”

Keep that last paragraph in mind as you read this last story: “For One Baby, Life Begins with Genome Revealed.”

Mechanization, Automation, Animation: Enchanting the Human-Built World

If you’re paying any attention at all to news coming out of the tech sector, it’s hard to go a day without coming across a story about a new robot, app, or tool that promises (or threatens) to do what we previously did for ourselves. Some of these tasks involve physical labor, but increasingly they involve cognitive, emotional, and even ethical labor. Thinking carefully about the implications of this trend is, in my view, one of the most important tasks before us.

Taking a comment from Adam Thierer on a recent post about “smart-homes” as my point of departure, I propose that we think about the trend described above as a three-step process aimed at the enchantment of the human-built world.

In Adam’s view, as I read his comment, the “smartness” of the “smart home” is simply an extension of the many ways that we have already automated household tasks over the course of the last 100 years or so. Moreover, to my claim that a “smart home” is “an active technological system that requires maintenance and regulation,” Adam commented, “Even in the days of mud huts and log cabins that was somewhat true.”¹

In my initial response to Adam, I suggested that while it is true that more primitive homes, huts and cabins if you will, involved technology and might even be considered technological systems if we press the semantic range of that phrase a bit, there were important discontinuities as well. To clarify that claim, I began by making some distinctions using home heating as an example.

For most of human history, if I wanted warmth in my home I would need to build and sustain a fire of some sort. I could, for example, build a fire in a fireplace or I could light one in a coal burning stove. This would require a good deal of effort and caution. In short, it required a significant amount of engagement on my part, physical and mental.

Then along came the furnace and central heating. I no longer needed to build and sustain a fire. I could simply flip a switch and a machine would generate the heat and disperse it throughout my home. But I would hesitate to call this an instance of automation. Instead, let’s call it mechanization. Central heating, machine heating if you will, mechanizes the work of providing heat. And, of course, with that mechanization comes far less engagement on my part. In fact, this initial step is probably the most obvious and striking point of discontinuity in the evolution of home heating.

Add a thermostat and I no longer need to actively monitor the temperature in order to keep my home comfortably warm. I can set the thermostat at a toasty 73 degrees and trust the system to do the work for me. Ease of use continues to advance as my degree of engagement diminishes. Now, I think, we can talk about automation. Of course, thermostats of varying degrees of sophistication are available. The simplest models allow you to set just one temperature and require you to manually change that setting if you want the temperature to adjust over the course of the day. More elaborate, digital thermostats allow you to program a series of temperature changes throughout the course of the day and for different days of the week. All of these, however, allow me to automate the functioning of the machine. But, we should note, altering these settings still requires direct action on my part.
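To see just how modest this kind of automation really is, consider that a programmable thermostat’s schedule amounts to little more than a lookup table. Here is a minimal sketch in Python; every time and temperature below is invented purely for illustration:

```python
from datetime import time

# A hypothetical weekday schedule: each entry maps a start time to a
# target temperature (°F). The values are invented for illustration.
WEEKDAY_SCHEDULE = [
    (time(6, 0), 68),    # wake: warm the house
    (time(8, 30), 62),   # away at work: save energy
    (time(17, 30), 73),  # evening: comfortably toasty
    (time(22, 0), 65),   # night: cooler for sleep
]

def target_temperature(schedule, now):
    """Return the setpoint in effect at clock time `now`.

    The last entry whose start time is <= now wins; before the first
    entry of the day, the schedule wraps around to the previous
    night's setting.
    """
    current = schedule[-1][1]  # default: last setting of the prior day
    for start, temp in schedule:
        if start <= now:
            current = temp
    return current

print(target_temperature(WEEKDAY_SCHEDULE, time(7, 15)))   # → 68
print(target_temperature(WEEKDAY_SCHEDULE, time(23, 30)))  # → 65
```

The point of the sketch is that nothing here adapts to the occupant; the machine simply replays decisions I made in advance, which is why changing my mind still requires my direct intervention.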

It seems that the next step in this progression is something like Nest, a thermostat that “learns” your preferences and takes over management of the temperature for you. Nest is illustrative of the “smart” trend that promises something more than simple automation. Tools like Nest take automation further along the path toward autonomous functionality by “learning” to regulate themselves. Nest can also be controlled with a smart phone. In other words, it can be networked; it can “talk” to other devices. It is, then, a potential component of the assemblage of technologies that together constitute the so-called Internet of Things, one manifestation of which is the “smart home.” At this point, I am thoroughly disengaged from the process of providing heat for my home. I don’t need to cut wood or start a fire. I don’t need to flip a switch. I don’t need to adjust controls. Without labor, attention, or decisions on my part, my home is comfortably heated.

I remain uncertain about what to call this last step. Mechanization, automation, and … what?

In my previous post, I couldn’t quite resist an allusion to the famous scene in James Whale’s Frankenstein where Dr. Frankenstein shouts, “It’s alive. It’s alive!” The allusion suggested “animation” as a name for the third step after mechanization and automation. That strikes me as a provocative and vivid word choice, but it also threatens to mystify more than it clarifies. I mean the term in a figurative sense, but it may be too easy to suppose that something more literal is intended. Nonetheless, throwing caution to the wind, I’m going to go with animation, at least for the time being. Blogging is nothing if not provisional, right?

So then, we have three discernible stages–mechanization, automation, animation–in the technological enchantment of the human-built world. The technological enchantment of the human-built world is the unforeseen consequence of the disenchantment of the natural world described by sociologists of modernity, Max Weber being the most notable. These sociologists claimed that modernity entailed the rationalization of the world and the purging of mystery, but they were only partly right. It might be better to say that the world was not so much disenchanted as it was differently enchanted. This displacement and redistribution of enchantment may be just as important a factor in shaping modernity as the putative disenchantment of nature.

In an offhand, stream-of-consciousness aside, I ventured that the allure of the smart-home, and similar technologies, arose from a latent desire to re-enchant the world. I’m doubling-down on that hypothesis. Here’s the working thesis: the ongoing technological enchantment of the human-built world is a corollary of the disenchantment of the natural world. The first movement yields the second, and the two are interwoven. To call this process of technological animation an enchantment of the human-built world is not merely a figurative post-hoc gloss on what has actually happened. Rather, the work of enchantment has been woven into the process all along.

In support of this claim we might consider, first, the entanglement of technology and magic just as the process of disenchantment was taking off in the early-modern period and, secondly, the pervasive and persistent presence of the religion of technology within the western technological project.

I’m going to leave it at that for now. In a subsequent post, I’ll bring Hannah Arendt’s discussion of labor, work, and action into the discussion to help us think about the trade-offs involved in this enchantment of the human-built world.

[Discussion continued here: More On Mechanization, Automation, Animation.]

________________________________________________________________

¹ There is a methodological question lying beneath the surface of this exchange: how do we wisely weigh the relevant degrees of continuity and discontinuity with older technology as we think about new technology? This can be tricky. There can be a tendency to exaggerate either the continuity or the discontinuity. In both cases, there would be nothing at all to learn because either nothing has changed and thus nothing needs to be learned, or else everything has changed and nothing of use can be learned. In the most unhelpful cases, both exaggerations are simultaneously affirmed. To generate hype, proponents of a new technology breathlessly proclaim its revolutionary character while at the same time disingenuously allaying criticism by insisting the new revolutionary technology is really just like any number of other technologies that preceded it.

The Pleasures of Laborious Reading

I’m hoping to begin posting a bit more frequently soon. First up will be a follow-up to my last post about smart-homes. Until then, here’s a piece by Freddie deBoer well worth your time: “in order to read, start reading.”

DeBoer laments how difficult it has become for many of us to read works that demand sustained attention. This, of course, was the concern that animated Nick Carr’s well-known 2008 essay, “Is Google Making Us Stupid?”

To counteract this trend, deBoer recommends that we take up what he calls a “project book.” As he lays it out, this strikes me as good advice. Along the way, deBoer also makes a series of characteristically trenchant observations about the Internet and what we might call Internet culture. For instance:

“The internet has an immune system, a tendency to produce pushback and resistance to arguments not just about the drawbacks and downsides of endless internet connectivity, but to the very notion of moderation in our use. There is something about the habitual aspects of the internet, the “more, now, again” aspects, that couple with the vague sense of embarrassment we feel about constant internet use to produce a default posture of insecurity and defensiveness about these behaviors.”

Do read the whole thing. What deBoer challenges is, in my view, one of the great temptations of our age: the willingness to abandon or outsource all sorts of labor–intellectual, moral, emotional–the fruits and pleasures of which can be had no other way.

It’s Alive, It’s Alive!

Your home, that is. It soon may be, anyway.

Earlier this week at the Worldwide Developers Conference, Apple introduced HomeKit, an iOS 8 framework that will integrate the various devices and apps which together transform an ordinary home into a “smart home.”

The “smart home,” like the flying car, has long been a much anticipated component of “the future.” The Jetsons had one, and, more recently, the Iron Man films turned Tony Stark’s butler, Edwin Jarvis, into JARVIS, an AI system that powers Stark’s very smart home. Note, in passing, the subtle tale of technological unemployment.

But the “smart home” is a more plausible element of our future than the flying car. Already in 1990, the Unity System offered a rather rudimentary iteration. And, as early as 1999, in the pages of Newsweek, Steven Levy was announcing the imminent arrival of what is now commonly referred to as the Internet of Things, the apotheosis of which would be the “smart home.” Levy didn’t call it the “smart home,” although he did refer to the “smart toilet,” but a “smart home” is what he was describing:

“Your home, for instance, will probably have one or more items directly hot-wired to the Internet: a set-top television box, a game console, a server sitting in the basement, maybe even a traditional PC. These would be the jumping-off points for a tiny radio-frequency net that broadcasts throughout the house. That way the Internet would be, literally, in the air. Stuff inside the house would inhale the relevant bits. Your automatic coffee maker will have access to your online schedule, so if you’re out of town it’ll withhold the brew. Your alarm clock might ring later than usual if it logs on to find out that you don’t have to get the kids ready for school–snow day! And that Internet dishwasher? No, it won’t be bidding on flatware at eBay auctions. Like virtually every other major appliance in your home, its Internet connection will be used to contact the manufacturer if something goes wrong.”

Envisioning this “galaxy” of digitally networked things, Levy already hints at the challenge of getting everything to work together in efficient and seamless fashion. That’s exactly where Apple is hoping to step in with HomeKit. At WWDC, Apple’s VP humbly suggested that his company could “bring some rationality to this space.” Of course, as Megan Garber puts it, “You could see it as Apple’s attempt to turn the physical world into a kind of App Store: yet another platform. Another area whose gates Apple keeps.”

When news broke about HomeKit, I was reminded of an interview the philosopher of technology Albert Borgmann gave several years ago. It was that interview, in fact, that led me to the piece by Levy. Borgmann was less than impressed with the breathless anticipation of the “smart home.”

“In the perfectly smart home,” Borgmann quipped, “you don’t do anything.”

Writing in the Wall Street Journal, Geoffrey Fowler gave one example of what Apple projected HomeKit could do: “Users would be able to tell their Siri virtual assistant that they are ‘going to bed’ and their phone would dim the lights, lock your doors and set the thermostat, among other tasks.”

There’s apparently something alluring and enchanting about such a scenario. I’m going to casually suggest that the allure might be conceived as arising from a latent desire to re-enchant the world. According to a widely accepted socio-historical account of the modern world, the advent of modernity disenchanted the pre-modern world. Gone were the spirits and spiritual forces at work in the world. Gone were the angels and witches and fairies. Gone was the mysticism that inspired both fear and wonder. All that remained was the sterile world of lifeless matter … and human beings alone in a vast universe that took no notice of them.

Technologies that make the environment responsive to our commands and our presence, tools that would be, presumably, alert to our desires and needs, even those we’ve not yet become aware of–such technologies promise to re-enchant the world, to make us feel less alone perhaps. They are the environmental equivalent of the robots that promise to become our emotional partners.

Borgmann, however, is probably right about technologies of this sort, “After a week you don’t notice them anymore. They mold into the inconspicuous normalcy of the background we now take for granted. These are not things that sustain us.”

Christopher Mims landed even nearer to the mark when he recently tweeted, “Just think how revolutionary the light switch would seem if until now we’d all been forced to control our homes through smartphones.”

Finally, in his WSJ story, Fowler wrote, “[Apple] is hoping it can become a hub of connected devices that, on their own, don’t do a very good job of helping you control a home.”

That last phrase is arresting. Existing products don’t do a very good job of helping you control your home. Interestingly though, I’ve never really thought of my home as something I needed to control. The language of control suggests that a “smart home” is an active technological system that requires maintenance and regulation. It’s a house come alive. Of course, it’s worth remembering that the pursuit of control is always paired with varying degrees of anxiety.