Facebook Doesn’t Care About Your Children

Facebook is coming for your children.

Is that framing too stark? Maybe it’s not stark enough.

Facebook recently introduced Messenger Kids, a version of their Messenger app designed for six- to twelve-year-olds. Antigone Davis, Facebook’s Public Policy Director and Global Head of Safety, wrote a blog post introducing Messenger Kids and assuring parents the app is safe for kids.

“We created an advisory board of experts,” Davis informs us. “With them, we are considering important questions like: Is there a ‘right age’ to introduce kids to the digital world? Is technology good for kids, or is it having adverse effects on their social skills and health? And perhaps most pressing of all: do we know the long-term effects of screen time?”

The very next line of Davis’s post reads, “Today we’re rolling out our US preview of Messenger Kids.”

Translation: We hired a bunch of people to ask important questions. We have no idea what the answers may be, but we built this app anyway.

Davis doesn’t even attempt to fudge an answer to those questions. She raises them and never comes back to them again. In fact, she explicitly acknowledges “we know there are still a lot of unanswered questions about the impact of specific technologies on children’s development.” But you know, whatever.

Naturally, we’re presented with statistics about the rates at which children under 13 use the Internet, Internet-enabled devices, and social media. It’s an argument from presumed inevitability. Kids are going to be online whether you like it or not, so they might as well use our product. More about this in a moment.

We’re also told that parents are anxious about their kids’ safety online. Chiefly, this amounts to concerns about privacy or online predators. Valid concerns, of course, and Facebook promises to give parents control over their kids’ online activity. However, safety, in this sense, is not the only concern we should have. A perfectly safe technology may nonetheless have detrimental consequences for our intellectual, moral, and emotional well-being and for the well-being of society when the technology’s effects are widely dispersed.

Finally, we’re given five principles that Facebook and its advisory board formulated to guide the development of its suite of products for children. These are largely meaningless sentences composed of platitudes and buzzwords.

Let’s not forget that this is the same company that “offered advertisers the opportunity to target 6.4 million younger users, some only 14 years old, during moments of psychological vulnerability, such as when they felt ‘worthless,’ ‘insecure,’ ‘stressed,’ ‘defeated,’ ‘anxious,’ and like a ‘failure.'”

Facebook doesn’t care about your children. Facebook cares about your children’s data. As Wired reported, “The company will collect the content of children’s messages, photos they send, what features they use on the app, and information about the device they use.”

There are no ads on Messenger Kids, the company is quick to point out. “For now,” I’m tempted to add. Barriers of this sort tend to erode over time. Moreover, even if the barrier holds, an end game remains.

“If they are weaned on Google and Facebook,” Jeffrey Chester, executive director of the Center for Digital Democracy, warns, “you have socialized them to use your service when they become an adult. On the one hand it’s diabolical and on the other hand it’s how corporations work.”

Facebook’s interest in producing an app for children appears to be a part of a larger trend. “Tech companies have made a much more aggressive push into targeting younger users,” the same Wired article noted, “a strategy that began in earnest in 2015 when Google launched YouTube Kids, which includes advertising.”

In truth, I think this is about more than just Facebook. It’s about thinking more carefully about how technology shapes our children and their experience. It is about refusing the rhetoric of inevitability and assuming responsibility.

Look, what if there is no safe way for seven-year-olds to use social media or even the Internet and Internet-enabled devices? I realize this may sound like a head-in-the-sand overreaction, and maybe it is, but perhaps it’s worth contemplating the question.

I also realize I’m treading on sensitive ground here, and I want to proceed with care. The last thing over-worked, under-supported parents need is something more to feel guilty about. Let’s forget the guilt. We’re all trying to do our best. Let’s just think together about this stuff.

As adults, we’ve barely got a handle on the digital world. We know devices and apps and platforms are designed to capture and hold attention in a manner that is intellectually and emotionally unhealthy. We know that these design choices are not made with the user’s best interest in mind. We are only now beginning to recognize the personal and social costs of our uncritical embrace of constant connectivity and social media. How eager should we be to usher our children into this reality?

The reality is upon them whether we like it or not, someone might counter. Maybe, but I don’t quite buy it. Even if it is, the degree to which this is the case will certainly vary based in large part upon the choices parents make and their resolve.

Part of our problem is that we think too narrowly about technology, almost always in terms of functionality and safety. With regard to children, this amounts to safeguarding against offensive content, against exploitation, and against would-be predators. Again, these are valid concerns, but they do not exhaust the range of questions we should be asking about how children relate to digital media and devices.

To be clear, this is not only about preventing “bad things” from happening. It is also a question of the good we want to pursue.

Our disordered relationship with technology is often a product of treating technology as an end rather than a means. Our default setting is to adopt uncritically and ask questions later, if at all. We need, instead, to clearly discern the ends we want to pursue and evaluate technology accordingly, especially when it comes to our children, because in this, as in so much else, they depend on us.

Some time ago, I put together a list of 41 questions to guide our thinking about the ethical dimensions of technology. These questions are a useful way of examining not only the technology we use but also the technology to which we introduce our children.

What ideals inform the choices we make when we raise children? What sort of person do we hope they will become? What habits do we want them to cultivate? How do we want them to experience time and place? How do we hope they will perceive themselves? These are just a few of the questions we should be asking.

Your answers to these questions may not be mine or your neighbor’s, of course. The point is not that we should share these ideals, but that we recognize that the realization of these ideals, whatever they may be for you and for me, will depend, in greater measure than most of us realize, on the tools we put in our children’s hands. All that I’m advocating is that we think hard about this and proceed with great care and great courage. Great care because the stakes are high; great courage because merely by our determination to think critically about these matters we will be setting ourselves against powerful and pervasive forces.



Ivan Illich on Technology and Labor

From Ivan Illich’s Tools for Conviviality (1973), a book that emerged out of conversations at The Center for Intercultural Documentation (CIDOC) in Cuernavaca, Mexico.

For a hundred years we have tried to make machines work for men and to school men for life in their service. Now it turns out that machines do not “work” and that people cannot be schooled for a life at the service of machines. The hypothesis on which the experiment was built must now be discarded. The hypothesis was that machines can replace slaves. The evidence shows that, used for this purpose, machines enslave men. Neither a dictatorial proletariat nor a leisure mass can escape the dominion of constantly expanding industrial tools.

The crisis can be solved only if we learn to invert the present deep structure of tools; if we give people tools that guarantee their right to work with high, independent efficiency, thus simultaneously eliminating the need for either slaves or masters and enhancing each person’s range of freedom. People need new tools to work with rather than tools that “work” for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves.

[…]

As the power of machines increases, the role of persons more and more decreases to that of mere consumers.

[…]

This world-wide crisis of world-wide institutions can lead to a new consciousness about the nature of tools and to majority action for their control. If tools are not controlled politically, they will be managed in a belated technocratic response to disaster. Freedom and dignity will continue to dissolve into an unprecedented enslavement of man to his tools.

Illich is among the older thinkers whose work on technology and society I think remains instructive and stimulating. These considerations seem especially relevant to our debates about automation and employment.

Illich was wide-ranging in his interests. Early in the life of this blog, I frequently cited In the Vineyard of the Text, his study of the evolution of writing technologies in the late medieval period.

The Rhetorical “We” and the Ethics of Technology

“Questioning AI ethics does not make you a gloomy Luddite,” or so the title of a recent article in a London business newspaper assures us. The most important thing to be learned here is that someone feels this needs to be said. Beyond that, there is also something instructive about the concluding paragraphs. If we read them against the grain, these paragraphs teach us something about how difficult it is to bring ethics to bear on technology.

“I’m simply calling for us to use all the tools at our disposal to build a better digital future,” the author tells us.

In practice, this means never forgetting what makes us human. It means raising awareness and entering into dialogue about the issue of ethics in AI. It means using our imaginations to articulate visions for a future that’s appealing to us.

If we can decide on the type of society we’d like to create, and the type of existence we’d like to have, we can begin to forge a path there.

All in all, it’s essential that we become knowledgeable, active, and influential on AI in every small way we can. This starts with getting to grips with the subject matter and past extreme and sensationalised points of view. The decisions we collectively make today will influence many generations to come.

Here are the challenges:

We have no idea what makes us human. You may, but we don’t.

We have nowhere to conduct meaningful dialogue; we don’t even know how to have meaningful dialogue.

Our imaginations were long ago surrendered to technique.

We can’t decide on the type of society we’d like to create or the type of existence we’d like to have, chiefly because this “we” is rhetorical. It is abstract and amorphous.

There is no meaningful antecedent to the pronouns we and us used throughout these closing paragraphs. Ethics is a communal business. This is no less true with regard to technology; perhaps it is all the more true. There is, however, no we there.

As individuals, we are often powerless against larger forces dictating how we are to relate to technology. The state is in many respects beholden to the technological: ideologically, politically, economically. Regrettably, we have very few communities located between the individual and the state constituting a we that can meaningfully deliberate and effectively direct the use of technology.

Technologies like AI emerge and evolve in social spaces that are resistant to substantial ethical critique. They also operate at a scale that undermines the possibility of ethical judgment and responsibility. Moreover, our society is ordered in such a way that there is very little to be done about it, chiefly because of the absence of structures that would sustain and empower ethical reflection and practice, the absence, in other words, of a we that is not merely rhetorical.



Recovering the Tech Critical Canon

Melvin Kranzberg was spotlighted in the Wall Street Journal this past weekend thanks to Christopher Mims. Kranzberg’s is not exactly a household name. He was a historian of technology who taught for many years at the Georgia Institute of Technology. In 1985, he gave a talk at the annual meeting of the Society for the History of Technology in which he outlined six laws or “truisms” about technology “deriving from a longtime immersion in the study of the development of technology and its interactions with sociocultural change.”

The first of these laws is the best known: “Technology is neither good nor bad; nor is it neutral.” You can read about the rest of the laws in this earlier post. They are not well-known outside of a relatively small circle of historians, philosophers, and critics specializing in the academic study of technology.

Mims, a journalist covering the tech industry, was recently introduced to Kranzberg’s Laws, and to his credit, he has now introduced them to many more people who, in all likelihood, would never otherwise have encountered Kranzberg’s very useful insights. In tweeting out the story, Mims acknowledged that he had never come across Kranzberg’s work despite covering the technology beat for over a decade.

I recount all of this as a way of explaining why I have recently decided to make a point of posting excerpts from what I’m calling the Tech Critical Canon*. I’m thinking of a long tradition of writing from a wide array of disciplines that has focused critical attention on the social and moral consequences of technology. The tradition includes historians, philosophers, sociologists, theologians, linguists, media theorists, economists, cultural critics, journalists, novelists, and many individuals whose interests and specializations truly defy disciplinary boundaries. In fact, this is what makes many of these older critics so interesting. They were eclectic and eccentric in their convictions, training, and life experiences. It was this eccentric eclecticism that allowed them to see more clearly than most what was happening.

These individuals do not speak with one voice; their perspectives often differ and their evaluations are not easily synthesized; and, of course, they were often wrong. But their writing, even when it is decades old, remains valuable. It is abundantly clear, however, that it is virtually unknown. Within the last month or so, I’ve commented on how recent cries for serious thinking about the political and ethical consequences of technology ignore the work of this diverse assortment of scholars and writers who have been doing just that for more than a century.

I am especially interested in the work of older critics, critics whose work appeared in the early and mid-twentieth century. I find these critics especially useful precisely because of their distance from the present. As I’ve noted elsewhere, if we read only contemporary sources on tech, we would be unlikely to overcome our chief obstacle: our thinking is already shaped by the very phenomena we seek to understand. The older critics offer a fresh vantage point and effectively new perspectives. They begin with different assumptions and operate with forgotten norms. Moreover, their mistakes will not be ours. (My point here is not unlike that made by C.S. Lewis writing in defense of old books.)

Chiefly, their distance from us and their proximity to older configurations of culture and technology mean that they can imagine modes of life and ways of being with technology that we can no longer experience or imagine when we rely only on the work of contemporary critics, much of which is, of course, essential. As Andrew Russell pointed out, Kranzberg helped foster the valuable work of scholars who continue to advance our understanding of technology and its consequences.

I fully recognize that I may very well be guilty of a common disorder: thinking that what the world really needs more of is the very thing about which I happen to care and for which I have a measure of aptitude. I also realize that my scribblings here will not amount to even a blip on the cultural radar. Be that as it may.

I’ve already posted a couple of excerpts (here and here). I’ll continue to do so and group these posts under the tag Tech Critical Canon. I do not plan to offer much commentary in these posts. I think the excerpts will stand well on their own. I trust that they will help us think more deeply about technology and lead us toward understanding and wisdom. I hope, as well, that they will function as teasers that induce readers to read the sources for themselves. Indeed, that is more or less how I’ve always thought about what I do here anyway: connecting readers to the really important work on technology that they may not encounter otherwise.

Admittedly, I’m not exactly hopeful that the recent interest in the ethics of technology and the work of thinkers such as Melvin Kranzberg will persist or that it will translate into meaningful change in the way that we order our society and our lives. But who knows?


*I should clarify, of course, that I mean critical in the same sense that we speak of a film critic or an art critic. The term does not necessarily imply only negative evaluations. For more on my view of technology criticism, see What Motivates the Critic of Technology?

Jacques Ellul On Adaptation of Human Beings to the Technical Milieu

Jacques Ellul coined the term Technique in an attempt to capture the true nature of contemporary Western society. Ellul was a French sociologist and critic of technology who was active throughout the mid to late twentieth century. He was a prolific writer but is best remembered as the author of The Technological Society. You can read an excellent introduction to his thought here.

Ellul defined Technique (la technique) as “the totality of methods rationally arrived at and having absolute efficiency (for a given stage of development) in every field of human activity.” It was an expansive term meant to describe far more than what we ordinarily think of as technology, even when we use that term in the widest sense.

In a 1963 essay titled “The Technological Order,” Ellul referred to technique as “the new and specific milieu in which man is required to exist,” and he offered the six defining characteristics of this “new technical milieu”:

a. It is artificial;
b. It is autonomous with respect to values, ideas, and the state;
c. It is self-determining in a closed circle. Like nature, it is a closed organization which permits it to be self-determinative independently of all human intervention;
d. It grows according to a process which is causal but not directed to ends;
e. It is formed by an accumulation of means which have established primacy over ends;
f. All its parts are mutually implicated to such a degree that it is impossible to separate them or to settle any technical problem in isolation.

In the same essay, Ellul offers this dense elaboration of how Technique “comprises organizational and psychosociological techniques”:

It is useless to hope that the use of techniques of organization will succeed in compensating for the effects of techniques in general; or that the use of psycho-sociological techniques will assure mankind ascendancy over the technical phenomenon. In the former case we will doubtless succeed in averting certain technically induced crises, disorders, and serious social disequilibrations; but this will but confirm the fact that Technique constitutes a closed circle. In the latter case we will secure human psychic equilibrium in the technological milieu by avoiding the psychobiologic pathology resulting from the individual techniques taken singly and thereby attain a certain happiness. But these results will come about through the adaptation of human beings to the technical milieu. Psycho-sociological techniques result in the modification of men in order to render them happily subordinate to their new environment, and by no means imply any kind of human domination over Technique.

That paragraph will bear re-reading and no small measure of unpacking, but here is the short version: Nudging is merely the calibration of the socio-biological machine into which we are being incorporated. Ditto life-hacking, mindfulness programs, and basically every app that offers to enhance your efficiency and productivity.

Ellul’s essay is included in Philosophy and Technology: Readings in the Philosophical Problems of Technology (1983), edited by Carl Mitcham and Robert Mackey.