Technology in the Classroom

I want to briefly draw your attention to a series of related posts about technology in the classroom, beginning with Clay Shirky’s recent post explaining his decision to have students put their wired digital devices away during class. Let me say that again: Clay Shirky has decided to ban laptops from his classroom. Shirky has long been one of the Internet’s leading advocates and cheerleaders, so this is a pretty telling indication of the scope of the problem.

I particularly appreciated the way Shirky focused on what we might call the ecosystem of the classroom. The problem is not simply that connected devices distract the student who uses them and hamper their ability to learn:

“Anyone distracted in class doesn’t just lose out on the content of the discussion, they create a sense of permission that opting out is OK, and, worse, a haze of second-hand distraction for their peers. In an environment like this, students need support for the better angels of their nature (or at least the more intellectual angels), and they need defenses against the powerful short-term incentives to put off complex, frustrating tasks. That support and those defenses don’t just happen, and they are not limited to the individual’s choices. They are provided by social structure, and that structure is disproportionately provided by the professor, especially during the first weeks of class.”

I came across Shirky’s post via Nick Carr, who also considers a handful of studies that appear to support the decision to create a relatively low-tech classroom environment. I recommend you click through to read the whole thing.

If you’re thinking that this is a rather retrograde, reactionary move to make, then I’d suggest taking a quick look at Alan Jacobs’s brief comments on the matter.

You might also want to ask yourself why the late Steve Jobs; Chris Anderson, the former editor of Wired and CEO of a robotics company; Evan Williams, the founder of Blogger, Twitter, and Medium; and a host of other tech-industry heavyweights have deployed seemingly draconian rules for how their own children relate to digital devices and the Internet. Here’s Anderson: “My kids accuse me and my wife of being fascists and overly concerned about tech, and they say that none of their friends have the same rules.”

Perhaps they are on to something, albeit in a “do-as-I-say-not-as-I-do” sort of way. Nick Bilton has the story here.

__________________________

Okay, and now a quick administrative note. Rather than create a separate entry for this, I thought it best just to raise the matter at the tail end of this shorter post. Depending on how you ordinarily get to this site, you may have noticed that the feed for this blog now only gives you a snippet view and asks you to click through to read the whole.

I initially made this change for rather self-serving reasons related to the architecture of WordPress, and it was meant to be temporary. However, I realized that the change resolved a couple of frustrations I’d had for a while.

The first of these centered on my mildly obsessive nature when it comes to editing and revising. Invariably, regardless of what care I take before publishing, posts go out with at least one or two typos, inelegant phrases, and the like. When I catch them later, I fix them, but those who get their posts via email never see the corrections. If you have to click over to read the whole post, however, you will always see the latest, cleanest version. Relatedly, I sometimes find it preferable to update a post with related information or new links rather than create a new post (e.g.). Email subscribers would be unlikely ever to see those updates unless they clicked through to the site for the most current version of the post.

Consequently, I’m considering keeping the snippet feed. I do realize, though, that this might be mildly annoying, involving as it does an extra click or two. So, my question to you is this: do you care? I have a small but dedicated readership, and I’d hate to make a change that might ultimately discourage you from continuing to read. If you have any thoughts on the matter, feel free to share in the comments below or via email.

Also, I’ve been quite negligent about replying to comments of late. When I get a chance to devote some time to this blog, which is not often, I’m opting to write instead. I really appreciate the comments, though, and I’ll do my best to interact as time allows.

Waiting for Socrates … So We Can Kill Him Again and Post the Video on YouTube

It will come as no surprise, I’m sure, if I tell you that the wells of online discourse are poisoned. It will come as no surprise because critics have complained about the tone of online discourse for as long as people have interacted with one another online. In fact, we more or less take the toxic, volatile nature of online discourse for granted. “Don’t read the comments” is about as routine a piece of advice as “look both ways before crossing the street.” And, of course, it is also true that complaints about the coarsening of public discourse in general have been around for a lot longer than the Internet and digital media.

That said, I’ve been intrigued, heartened actually, by a recent round of posts bemoaning the state of online rhetoric from some of the most thoughtful people whose work I follow. Here is Freddie deBoer lamenting the rhetoric of the left, and here is Matthew Anderson noting much of the same on the right. Here is Alan Jacobs on why he’s stepping away from Twitter. Follow any of those links and you’ll find another series of links to thoughtful, articulate writers all telling us, more or less, that they’ve had enough. This piece urges civility and it suggests, hopefully (naively?), that the “Internet” will learn soon enough to police itself, but the evidence it cites along the way seems rather to undermine such hopefulness. I won’t bother to point you to some of the worst of what I’ve regrettably encountered online in recent weeks.

Why is this the case? Why, as David Sessions recently put it, is the state of the Internet awful?

Like everyone else, I have scattered thoughts about this. For one thing, the nature of the medium seems to encourage rancor, incivility, misunderstanding, and worse. Anonymity has something to do with this, and so does the abstraction of the body from the context of communication.

Along the same media-ecological lines, Walter Ong noted that oral discourse tends to be agonistic and literate discourse tends to be irenic. Online discourse tends to be conducted in writing, which might seem to challenge Ong’s characterization. But just as television and radio constituted what Ong called secondary orality, so might we say that social media is a form of secondary literacy, blurring the distinctions between orality and literacy. It is text-based, but, like oral discourse, it brings people into a context of relative communicative immediacy. That is to say that through social media people respond to one another in public and in short order, more as they would in a face-to-face encounter, for example, than in private letters exchanged over the course of months.

In theory, writing affords us the temporal space to be more thoughtful and precise in expressing our ideas, but, in practice, the expectations of immediacy in digital contexts collapse that space. So we lose the strengths of each medium: we get none of the meaning-making cues of face-to-face communication and none of the time for reflection that written communication ordinarily grants. The media context, then, ends up agonistic and rife with misunderstanding; it encourages performative pugilism.

Also, as the moral philosopher Alasdair MacIntyre pointed out some time ago, we no longer operate with a set of broadly shared assumptions about what is good and what shape a good life should take. Our ethical reasoning tends not to be built on the same foundation. Because we are reasoning from incompatible moral premises, the conclusions reached by two opposing parties tend to be interpreted as sheer stupidity or moral obtuseness. In other words, because our arguments, proceeding as they do from such disparate moral frameworks, fail to persuade, we begin to assume that those who will not yield to our moral vision must be fools or worse. Moreover, we conclude, fools and miscreants cannot be argued with; they can only be shamed, shouted down, or otherwise silenced.

Digital dualism is also to blame. Some people seem to operate under the assumption that they are not really racists, misogynists, anti-Semites, etc.–they just play one on Twitter. It really is much too late in the game to play that tired card.

Perhaps, too, we’ve conflated truth and identity in such a way that we cannot conceive of a challenge to our views as anything other than a challenge to our humanity. Conversely, it seems that in some highly charged contexts being wrong can cost you the basic respect one might be owed as a fellow human being.

Finally, the Internet is awful because, frankly, people are awful. We all are; at least we all can be under the right circumstances. As Solzhenitsyn put it, “If only there were evil people somewhere insidiously committing evil deeds, and it were necessary only to separate them from the rest of us and destroy them. But the line dividing good and evil cuts through the heart of every human being.”

To that list, I want to offer just one more consideration: a little knowledge is a dangerous thing, and there are few things the Internet does better than give everyone a little knowledge. A little knowledge is a dangerous thing because it is just enough to give us the illusion of mastery and a sense of authority. This illusion, encouraged by the myth of having all the world’s information at our fingertips, has led us to believe that by skimming an article here or reading the summary of a book there we become experts who may now liberally pontificate about the most complex and divisive issues with unbounded moral and intellectual authority. This is the worst kind of insufferable foolishness: the kind that mistakes itself for wisdom without a hint of irony.

Real knowledge, on the other hand, is constantly aware of all that it does not know. The more you learn, the more you realize how much you don’t know, and the more hesitant you’ll be to speak as if you’ve got everything figured out. Getting past the threshold of “a little knowledge” tends to breed humility and create the conditions that make genuine dialogue possible. But that threshold will never be crossed if all we ever do is skim the surface of reality, and this seems to be the mode of engagement encouraged by the information ecosystem sustained by digital media.

We’re in need of another Socrates who will teach us once again that the way of wisdom starts with a deep awareness of our own ignorance. Of course, we’d kill him too, after a good skewering on Twitter, and probably without the dignity of hemlock. A posthumous skewering would follow, naturally, after the video of his death got passed around on Reddit and YouTube.

I don’t want to leave things on that cheery note, but the fact is that I don’t have a grand scheme for making online discourse civil, informed, and thoughtful. I’m pretty sure, though, that things will not simply work themselves out for the better without deliberate and sustained effort. Consider how W.H. Auden framed the difference between traditional cultures and modernity:

“The old pre-industrial community and culture are gone and cannot be brought back. Nor is it desirable that they should be. They were too unjust, too squalid, and too custom-bound. Virtues which were once nursed unconsciously by the forces of nature must now be recovered and fostered by a deliberate effort of the will and the intelligence. In the future, societies will not grow of themselves. They will be either made consciously or decay.”

For better or worse, or more likely both, this is where we find ourselves–either we deploy a deliberate effort of will and intelligence or we face perpetual decay. Who knows, maybe the best we can do is to form and maintain enclaves of civility and thoughtfulness amid the rancor, communities of discourse where meaningful conversation can be cultivated. These would probably remain small communities, but their success would be no small thing.

__________________________________

Update: After publishing, I read Nick Carr’s post on the revival of blogs and the decline of Big Internet. “So, yeah, I’m down with this retro movement,” Carr writes. “Bring back personal blogs. Bring back RSS. Bring back the fun. Screw Big Internet.” I thought that was good news in light of my closing paragraph.

And, just in case you need more by way of diagnosis, there’s this: “A Second Look At The Giant Garbage Pile That Is Online Media, 2014.”

Our Little Apocalypses

An incoming link to my synopsis of Melvin Kranzberg’s Six Laws of Technology alerted me to a short post on Quartz about a new book by an author named Michael Harris. The book, The End of Absence: Reclaiming What We’ve Lost in a World of Constant Connection, explores the tradeoffs induced by the advent of the Internet. Not having read the book, I obviously can’t say much about it, but I was intrigued by one angle Harris takes that comes across in the Quartz piece.

Harris’s book is focused on the generation, a fuzzy category to be sure, that came of age just before the Internet exploded onto the scene in the early 90s. Here’s Harris:

“If you were born before 1985, then you know what life is like both with the internet and without. You are making the pilgrimage from Before to After.”

“If we’re the last people in history to know life before the internet, we are also the only ones who will ever speak, as it were, both languages. We are the only fluent translators of Before and After.”

It would be interesting to read what Harris does with this framing. In any case, it’s something I’ve thought about often. This is my fifteenth year teaching. Over the years I’ve noticed, with each new class, how the world that I knew as a child and as a young adult recedes further and further into the murky past. As you might guess, digital technology has been one of the most telling indicators.

Except for a brief flirtation with Prodigy on an MS-DOS machine with a monochrome screen, the Internet did not come into my life until I was a freshman in college. I’m one of those people Harris is writing about, one of the Last Generation to know life before the Internet. Putting it that way threatens to steer us into a rather unseemly romanticism, and, knowing that I’m temperamentally drawn to dying lights, I want to make sure I don’t give way to it. That said, it does seem to me that those who’ve known the Before and After, as Harris puts it, are in a unique position to evaluate the changes. Experience, after all, is irreducible and incommunicable.

One of the recurring rhetorical tropes I’ve listed as a Borg Complex symptom runs like this: every new technology elicits criticism and evokes fear, society always survives the so-called moral panic or techno-panic, and therefore, QED, those critiques and fears, including those being expressed at present, are always misguided and overblown. It’s a pattern of thought I’ve complained about more than once. In fact, it features as the tenth of my unsolicited points of advice to tech writers.

Now, while it is true, as Adam Thierer has noted here, that we should try to understand how societies and individuals have come to cope with or otherwise integrate new technologies, it is not the case that such negotiated settlements are always unalloyed goods for society or for individuals. But this line of argument is compelling only to the degree that living memory of what has been displaced has been lost. I may know at an intellectual level what has been lost, because I read about it in a book, for example, but it is another thing altogether to have felt that loss. We move on, in other words, because we forget the losses, or, more to the point, because we never knew or experienced the losses for ourselves–they were always someone else’s problem.

To be very clear and to avoid the pedantic, sanctimonious reply–although, in all honesty, I’ve gotten so little of that on this blog that I’ve come to think that a magical filter of civility vets all those who come by–let me affirm that yes, of course, I certainly would’ve made many trade-offs along the way, too. To recognize costs and losses does not mean that you always refuse to incur them, it simply means that you might incur them in something other than a naive, triumphalist spirit.

Around this time last year, an excerpt from Jonathan Franzen’s then-forthcoming edited work on Karl Kraus was published in the Guardian; it was panned, frequently and forcefully, and deservedly so in some respects. But the conclusion of the essay struck me then as being on to something.

“Maybe … apocalypse is, paradoxically, always individual, always personal,” Franzen wrote,

“I have a brief tenure on earth, bracketed by infinities of nothingness, and during the first part of this tenure I form an attachment to a particular set of human values that are shaped inevitably by my social circumstances. If I’d been born in 1159, when the world was steadier, I might well have felt, at fifty-three, that the next generation would share my values and appreciate the same things I appreciated; no apocalypse pending.”

But, of course, he wasn’t. He was born in the modern world, like all of us, and this has meant change, unrelenting change. Here is where the Austrian writer Karl Kraus, whose life straddled the turn of the twentieth century, comes in: “Kraus was the first great instance of a writer fully experiencing how modernity, whose essence is the accelerating rate of change, in itself creates the conditions for personal apocalypse.” Perhaps. I’m tempted to quibble with this claim. The words of John Donne, “Tis all in pieces, all coherence gone,” come to mind. Yet, even if Franzen is not quite right about the historical details, I think he’s given honest voice to a common experience of modernity:

“The experience of each succeeding generation is so different from that of the previous one that there will always be people to whom it seems that the key values have been lost and there can be no more posterity. As long as modernity lasts, all days will feel to someone like the last days of humanity. Kraus’s rage and his sense of doom and apocalypse may be the antithesis of the upbeat rhetoric of Progress, but like that rhetoric, they remain an unchanging modality of modernity.”

This is, perhaps, a bit melodramatic, and it is certainly not all that could be said on the matter, or all that should be said. But Franzen is telling us something about what it feels like to be alive these days. It’s true, Franzen is not the best public face for those who are marginalized and swept aside by the tides of technological change, tides which do not lift all boats, tides which may, in fact, sink a great many. But there are such people, and we do well to temper our enthusiasm long enough to enter, so far as it is possible, into their experience. In fact, precisely because we do not have a common culture to fall back on, we must work extraordinarily hard to understand one another.

Franzen is still working on the assumption that these little personal apocalypses are a generational phenomenon. I’d argue that he’s underestimated the situation. The rate of change may be such that the apocalypses are now intra-generational. It is not simply that my world is not my parents’ world; it is that my world now is not what my world was a decade ago. We are all exiles now, displaced from a world we cannot reach because it fades away just as its contours begin to materialize. This explains why, as I wrote earlier this year, nostalgia is not so much a desire for a place or a time as it is a desire for some lost version of ourselves. We are like Margaret who, in Hopkins’s poem, laments the passing of the seasons; Margaret, to whom the poet’s voice says kindly, “It is Margaret you mourn for.”

Although I do believe that certain kinds of change ought to be resisted–I’d be a fool not to–none of what I’ve been trying to get at in this post is about resisting change in itself. Rather, I think all I’ve been trying to say is this: we must learn to take account of how differently we experience the changing world so that we might best help one another as we live through the change that must come. That is all.

Unplugged

I’m back. In fact, I’ve been back for more than a week now. I’ve been back from several days spent in western North Carolina. It’s beautiful country out there, and, where I was staying, it was beautiful country without cell phone signal or Internet connection. It was a week-long digital sabbath, or, if you prefer, a week-long digital detox. It was a good week. I didn’t find myself. I didn’t discover the meaning of life. I had no epiphanies, and I didn’t necessarily feel more connected to nature. But it was a good week.

I know that reflection pieces on technology sabbaths, digital detoxes, unplugging, and disconnecting are a dime a dozen. Slightly less common are pieces critical of the disconnectionists, as Nathan Jurgenson has called them, but these aren’t hard to come by either. Others, like Evgeny Morozov, have contributed more nuanced evaluations. Not only has the topic been widely covered, but if you’re reading this blog, I’d guess you’re likely to be more or less sympathetic to these practices, even if you harbor some reservations about how they are sometimes presented and implemented. All of that to say, I’ve hesitated to add yet another piece on the experience of disconnection, especially since I’d be (mostly) preaching to the choir. But … I’m going to try your patience and offer just a few thoughts for your consideration.

First, I think the week worked well because its purpose wasn’t to disconnect from the Internet or digital devices; being disconnected was simply a consequence of where I happened to be. I suspect that when you explicitly set out to disconnect, the psychology of the experience works against you. You’re disconnecting in order to be disconnected because you assume or hope it will yield some beneficial consequences. The potential problem with this scenario is that “being connected” is still framing, and to some degree defining, your experience. When you’re disconnected, you’re likely to be thinking about your experience in terms of not being connected. Call it the disconnection paradox.

This might mean, for example, that you’re overly aware of what you’re missing out on, thus distracted from what you hoped to achieve by disconnecting. It might also lead to framing your experience negatively in terms of what you didn’t do–which isn’t ultimately very helpful–rather than positively in terms of what you accomplished. In the worst cases, it might also lead to little more than self-congratulatory or self-loathing status updates.

In my recent case, I didn’t set out to be disconnected. In fact, I was rather disappointed that I’d be unable to continue writing about some of the themes I’d been recently addressing. So while I was carrying on with my disconnected week, I didn’t think at all about being connected or disconnected; it was simply a matter of fact. And, upon reflection, I think this worked in my favor.

This observation does raise a practical problem, however. How can one disconnect, if so desired, while avoiding the disconnection paradox? Two things come to mind. As Morozov pointed out in his piece on the practice of disconnection, there’s little point in disconnecting if it amounts to coming up for breath before plunging back into the digital flood. Ultimately, then, the idea is to so order our digital practices that enforced periods of disconnection are unnecessary.

But what if, for whatever reason, this is not a realistic goal? At this point we run up against the limits of individual actions and need to think about how to effect structural and institutional changes. Alongside those long-term projects, I’d suggest that making the practice of disconnection regular and habitual will eventually overcome the disconnection paradox.

Second consideration, obvious though it may be: it matters what you do with the time that you gain. For my part, I was more physically active than I would be during the course of an ordinary week, much more so. I walked, often; I swam; and I did a good bit of paddling too. Not all of this activity was pleasurable as it transpired. Some of it was exhausting. I was often tired and sore. But I welcomed all of it because it relieved the accumulated stress and tension that I tend to carry around on my back, shoulders, neck, and jaw, much of it a product of sitting in front of a computer or with a book for extended periods of time. It was a good week because at the end of it, my body felt as good as it had in a long time, even if it was a bit battered and ragged.

The feeling reminded me of what Patrick Leigh Fermor wrote about his stay in a monastery in the 1950s, a kind of modernity detox. Initially, he was agitated; then he was overwhelmed for a few days by the desire to sleep. Finally, he emerged “full of energy and limpid freshness.” Here is how he described the experience in A Time to Keep Silence:

“The explanation is simple enough: the desire for talk, movements and nervous expression that I had transported from Paris found, in this silent place, no response or foil, evoked no single echo; after miserably gesticulating for a while in a vacuum, it languished and finally died for lack of any stimulus or nourishment. Then the tremendous accumulation of tiredness, which must be the common property of all our contemporaries, broke loose and swamped everything. No demands, once I had emerged from that flood of sleep, were made upon my nervous energy: there were no automatic drains, such as conversation at meals, small talk, catching trains, or the hundred anxious trivialities that poison everyday life. Even the major causes of guilt and anxiety had slid away into some distant limbo and not only failed to emerge in the small hours as tormentors but appeared to have lost their dragonish validity.”

“[T]he tremendous accumulation of tiredness, which must be the common property of all our contemporaries”–indeed, and to that we might add the tremendous accumulation of stress and anxiety. The Internet, always-on connectivity, and digital devices have not of themselves caused the tiredness, stress, and anxiety, but they haven’t helped either. In certain cases they’ve aggravated the problem. And, I’d suggest, they have done so regardless of what, specifically, we have been doing. Rather, the aggravation is in part a function of how our bodies engage with these tools. Whether we spend a day in front of a computer perusing cat videos, playing Minecraft, writing a research paper, or preparing financial reports makes little difference to our bodies. In each case it is a sedentary day, and such days are, as we all know, less than ideal for our bodies. And, because so much of our well-being depends on our bodies, the consequences extend to the whole of our being.

I know countless critics since the dawn of industrial society have lamented the loss of regular physical activity, particularly activity that unfolded in “nature.” Long before the Internet, such complaints were raised about the factory and the cubicle. It is also true that many of these calls for robust physical activity have been laden with misguided assumptions about the nature of masculinity and worse. But none of this changes the stubborn, intractable fact that we are embodied creatures and the concrete physicality of our nature is subject to certain limits and thrives under certain conditions and not others.

One further point about my experience: some of it was moderately risky. Not extreme-sports risky or risky bordering on foolish, you understand. More like “watch where you step, there might be a rattlesnake” risky (I avoided one by two feet or so) or “take care not to slip off the narrow trail, that’s a 300-foot drop” risky (I took no such falls, happily). I’m not sure what I can claim for all of this, but I would be tempted to make a Merleau-Ponty-esque argument about the sort of engagement with our surroundings that navigating risk requires of us. I’d modestly suggest, on a strictly anecdotal basis, that there is something mentally and physically salubrious about safely navigating the experience of risk. While we’re at it, plug in the “troubles” (read: sometimes risky, often demanding activities) that philosopher Albert Borgmann encourages us to accept in principle.

Of course, it must immediately be added that this is a first-world problem par excellence. Around the globe there are people who have no choice but to constantly navigate all sorts of risks to their well-being, and not of the moderate variety either. It must then seem perverse to suggest that some of us might need to occasionally elect to encounter risk, but only carefully so. Indeed, but such might nonetheless be the case. Certainly, it is also true that all of us are at risk every day when walking a city street, or driving a car, or flying in a plane, and so on. My only rejoinder is again to lean on my experience and suggest that the sort of physical activity I engaged in had the unexpected effect of calling on and honing aspects of my body and mind that are not ordinarily called into service by my typical day-to-day experience, and this was a good thing. The accustomed risks we thoughtlessly take, crossing a street, say, do not call forth the same mental and bodily resources precisely because they are a routinized part of our experience.

A final thought. Advocating disconnection sometimes raises the charge of elitism–Sherry Turkle strolling down Cape Cod beaches and whatnot. I more or less get where this is coming from, I think. Disconnection is often construed as a luxury experience. Who gets to placidly stroll the beaches of Cape Cod anyway? And, indeed, it is an unfortunate feature of modernity’s unfolding that what we eliminate from our lives, often to make room for one technology or another, we then end up compensating for with another technology, because we suddenly realize that what we eliminated might have been useful and health-giving.

It was Neil Postman, I believe, who observed that having eliminated walking by the adoption of the automobile and the design of our public spaces, we then invented a machine on which we could simulate walking in order to maintain a minimal level of fitness. Postman’s chief focus, if I remember the passage correctly, was to point out the prima facie absurdity of the case, but I would add an economic consideration: in this pattern of technological displacement and replacement, the replacement is always a commodity. No one previously paid to walk, but the treadmill and the gym membership are bought and sold. So it is now with disconnection: it is often packaged as a commodified experience that must be bought, and the costs of disconnection (monetary and otherwise) are for some too high to bear. This is unfortunate if not simply tragic.

But it seems to me that the answer is not to dismiss the practice of disconnecting as such or efforts to engage more robustly with the wider world. If these practices are, even in small measure, steps toward human flourishing, then our task is to figure out how we can make them as widely available as possible.

The Facebook Experiment, Briefly Noted

More than likely, you’ve recently heard about Facebook’s experiment in the manipulation of user emotions. I know, Facebook as a whole is an experiment in the manipulation of user emotions. Fair enough, but this was a more pointed experiment that involved the manipulation of what users see in their News Feeds. Here is how the article in the Proceedings of the National Academy of Sciences summarized the significance of the findings:

“We show, via a massive (N = 689,003) experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness. We provide experimental evidence that emotional contagion occurs without direct interaction between people (exposure to a friend expressing an emotion is sufficient), and in the complete absence of nonverbal cues.”

Needless to say (well except to Facebook and the researchers involved), this massive psychological experiment raises all sorts of ethical questions and concerns. Here are some of the more helpful pieces I’ve found on the experiment and its implications:

Update: Here are two more worth considering:

Those six pieces should give you a good sense of the variety of issues involved in the whole situation, along with a host of links to other relevant material.

I don’t have too much to add except two quick observations. First, I was reminded, especially by Gurstein’s post, of Greg Ulmer’s characterization of the Internet as a “prosthesis of the unconscious.” Ulmer means something like this: the Internet has become a repository of the countless ways that culture has imprinted itself upon us and shaped our identity. Prior to the advent of the Internet, most of those memories and experiences would be lost to us, even while they may have continued to be a part of who we became. The Internet, however, allows us to access many, if not all, of these cultural traces, bringing them to our conscious awareness and allowing us to think about them.

What Facebook’s experiment suggests rather strikingly is that such a prosthesis is, as we should have known, a two-way interface. It not only facilitates our extension into the world; it is also a means by which the world can take hold of us. As a prosthesis of the unconscious, the Internet is not only an extension of our unconscious; it also permits the manipulation of the unconscious by external forces.

Secondly, I was reminded of Paul Virilio’s idea of the general or integral accident. Virilio has written extensively about technology, speed, and accidents. The accident is an expansive concept in his work. In his view, accidents are built into the nature of any new technology. As he has frequently put it, “When you invent the ship, you also invent the shipwreck; when you invent the plane you also invent the plane crash … Every technology carries its own negativity, which is invented at the same time as technical progress.”

The general or integral accident is made possible by complex technological systems. The accident made possible by a nuclear reactor or air traffic is obviously of a greater scale than that made possible by the invention of the hammer. Complex technological systems create the possibility of cascading accidents of immense scale and destructiveness. Information technologies also introduce the possibility of integral accidents. Virilio’s most common examples of these information accidents include the flash crashes on stock exchanges induced by electronic trading.

All of this is to say that Facebook’s experiment gives us a glimpse of what shape the social media accident might take. An interviewer alludes to this line from Virilio: “the synchronizing of collective emotions that leads to the administration of fear.” I’ve not been able to track down the original context, but it struck me as suggestive in light of this experiment.

Oh, and lastly, Facebook COO Sheryl Sandberg issued an apology of sorts. I’ll let Nick Carr tell you about it.