The Shape of Our Tools, The Shape of Our Souls

When, a few weeks ago, I suggested that the dystopia is already here, it’s just not evenly distributed, I cited a story about the strange and disturbing world of children’s YouTube. Today, you can read more about YouTubers who made hundreds of thousands of dollars creating and posting these videos until YouTube started shutting them down.

There’s one line in this story to which I’ll draw your attention. One prominent “content creator” described his method this way: “We learned to fuel it and do whatever it took to please the algorithm.”

I submit to you that this line is as good a slogan for our emerging social reality as any you’re likely to find: do whatever it takes to please the algorithm.

It reminded me of the bracingly honest and cheery slogan associated with the 1933 World’s Fair in Chicago: “Science Finds, Industry Applies, Man Conforms.” Or Carlyle’s complaint that people were becoming “mechanical in head and heart.”

It is true, of course, that we will bend to the shape of any number of external realities: natural, social, and technological.* To be human is to both shape and be shaped by the world we inhabit.

But what is the shape to which our tools and devices encourage us to conform?

Who or what do we seek to please by the way we use them?

Do they sustain our humanity or erode it?

These are the questions we do well to ask.


* It is also true that the boundaries between those categories are blurry.

A Note for Readers

A few months ago, I set up an account with Patreon, a platform that allows supporters to make monthly pledges to writers, artists, etc. I did so as part of my ongoing effort to figure out a way of making my way as what one might call an independent scholar. I wasn’t exactly making a killing with Patreon, but a few of you generous souls have seen fit over the last few months to pay the writer. I’ve been and remain deeply appreciative.

It turns out that recent changes to Patreon’s fee structure have made giving small amounts rather onerous for donors, creating quite the backlash from users. Given Patreon’s changes, and emails from readers about a few related frustrations with the service, I’ve decided to set up an alternative. Alongside the Patreon link, you’ll now find a link to a Paypal.me page. Unfortunately, this does not provide a way to set up recurring donations, but it is pretty easy to use and they take less from both of us, as far as I can make out. It also makes it easy to make a one-time donation if, for example, you found a particular post especially helpful or brilliant or life-changing, etc.

For now, I’ll be leaving the Patreon account open. If you’re one of the happy few who have donated through Patreon, please do feel free to suspend your donations if you find it a poor use of your resources.

Also, I’m always grateful for feedback. I sometimes envision my work here as an attempt to occupy a space somewhere between academic and popular writing on technology: a little more accessible than the former and offering a bit more depth than the latter. Let me know how I’m doing. I’m curious, too, about which sorts of posts readers find useful or what you may want to see more of (or less of). You can find my email address on the About page.

Cheers!

Facebook Doesn’t Care About Your Children

Facebook is coming for your children.

Is that framing too stark? Maybe it’s not stark enough.

Facebook recently introduced Messenger Kids, a version of their Messenger app designed for six- to twelve-year-olds. Antigone Davis, Facebook’s Public Policy Director and Global Head of Safety, wrote a blog post introducing Messenger Kids and assuring parents the app is safe for kids.

“We created an advisory board of experts,” Davis informs us. “With them, we are considering important questions like: Is there a ‘right age’ to introduce kids to the digital world? Is technology good for kids, or is it having adverse effects on their social skills and health? And perhaps most pressing of all: do we know the long-term effects of screen time?”

The very next line of Davis’s post reads, “Today we’re rolling out our US preview of Messenger Kids.”

Translation: We hired a bunch of people to ask important questions. We have no idea what the answers may be, but we built this app anyway.

Davis doesn’t even attempt to fudge an answer to those questions. She raises them and never comes back to them again. In fact, she explicitly acknowledges “we know there are still a lot of unanswered questions about the impact of specific technologies on children’s development.” But you know, whatever.

Naturally, we’re presented with statistics about the rates at which children under 13 use the Internet, Internet-enabled devices, and social media. It’s an argument from presumed inevitability. Kids are going to be online whether you like it or not, so they might as well use our product. More about this in a moment.

We’re also told that parents are anxious about their kids’ safety online. Chiefly, this amounts to concerns about privacy or online predators. Valid concerns, of course, and Facebook promises to give parents control over their kids’ online activity. However, safety, in this sense, is not the only concern we should have. A perfectly safe technology may nonetheless have detrimental consequences for our intellectual, moral, and emotional well-being and for the well-being of society when the technology’s effects are widely dispersed.

Finally, we’re given five principles Facebook and its advisory board developed in order to guide the development of their suite of products for children. These are largely meaningless sentences composed of platitudes and buzzwords.

Let’s not forget that this is the same company that “offered advertisers the opportunity to target 6.4 million younger users, some only 14 years old, during moments of psychological vulnerability, such as when they felt ‘worthless,’ ‘insecure,’ ‘stressed,’ ‘defeated,’ ‘anxious,’ and like a ‘failure.'”

Facebook doesn’t care about your children. Facebook cares about your children’s data. As Wired reported, “The company will collect the content of children’s messages, photos they send, what features they use on the app, and information about the device they use.”

There are no ads on Messenger Kids, the company is quick to point out. “For now,” I’m tempted to add. Barriers of this sort tend to erode over time. Moreover, even if the barrier holds, an end game remains.

“If they are weaned on Google and Facebook,” Jeffrey Chester, executive director for the Center for Digital Democracy, warns, “you have socialized them to use your service when they become an adult. On the one hand it’s diabolical and on the other hand it’s how corporations work.”

Facebook’s interest in producing an app for children appears to be a part of a larger trend. “Tech companies have made a much more aggressive push into targeting younger users,” the same Wired article noted, “a strategy that began in earnest in 2015 when Google launched YouTube Kids, which includes advertising.”

In truth, I think this is about more than just Facebook. It’s about thinking more carefully about how technology shapes our children and their experience. It is about refusing the rhetoric of inevitability and assuming responsibility.

Look, what if there is no safe way for seven-year-olds to use social media or even the Internet and Internet-enabled devices? I realize this may sound like head-in-the-ground overreaction, and maybe it is, but perhaps it’s worth contemplating the question.

I also realize I’m treading on sensitive ground here, and I want to proceed with care. The last thing over-worked, under-supported parents need is something more to feel guilty about. Let’s forget the guilt. We’re all trying to do our best. Let’s just think together about this stuff.

As adults, we’ve barely got a handle on the digital world. We know devices and apps and platforms are designed to capture and hold attention in a manner that is intellectually and emotionally unhealthy. We know that these design choices are not made with the user’s best interest in mind. We are only now beginning to recognize the personal and social costs of our uncritical embrace of constant connectivity and social media. How eager should we be to usher our children into this reality?

The reality is upon them whether we like it or not, someone might counter. Maybe, but I don’t quite buy it. Even if it is, the degree to which this is the case will certainly vary based in large part upon the choices parents make and their resolve.

Part of our problem is that we think too narrowly about technology, almost always in terms of functionality and safety. With regard to children, this amounts to safeguarding against offensive content, against exploitation, and against would-be predators. Again, these are valid concerns, but they do not exhaust the range of questions we should be asking about how children relate to digital media and devices.

To be clear, this is not only about preventing “bad things” from happening. It is also a question of the good we want to pursue.

Our disordered relationship with technology is often a product of treating technology as an end rather than a means. Our default setting is to adopt uncritically and ask questions later, if at all. We need, instead, to clearly discern the ends we want to pursue and evaluate technology accordingly, especially when it comes to our children, because in this, as in so much else, they depend on us.

Some time ago, I put together a list of 41 questions to guide our thinking about the ethical dimensions of technology. These questions are a useful way of examining not only the technology we use but also the technology to which we introduce our children.

What ideals inform the choices we make when we raise children? What sort of person do we hope they will become? What habits do we desire for them to cultivate? How do we want them to experience time and place? How do we hope they will perceive themselves? These are just a few of the questions we should be asking.

Your answers to these questions may not be mine or your neighbor’s, of course. The point is not that we should share these ideals, but that we recognize that the realization of these ideals, whatever they may be for you and for me, will depend, in greater measure than most of us realize, on the tools we put in our children’s hands. All that I’m advocating is that we think hard about this and proceed with great care and great courage. Great care because the stakes are high; great courage because merely by our determination to think critically about these matters we will be setting ourselves against powerful and pervasive forces.


If you’ve appreciated what you’ve read, consider supporting the writer.

Ivan Illich on Technology and Labor

From Ivan Illich’s Tools for Conviviality (1973), a book that emerged out of conversations at The Center for Intercultural Documentation (CIDOC) in Cuernavaca, Mexico.

For a hundred years we have tried to make machines work for men and to school men for life in their service. Now it turns out that machines do not “work” and that people cannot be schooled for a life at the service of machines. The hypothesis on which the experiment was built must now be discarded. The hypothesis was that machines can replace slaves. The evidence shows that, used for this purpose, machines enslave men. Neither a dictatorial proletariat nor a leisure mass can escape the dominion of constantly expanding industrial tools.

The crisis can be solved only if we learn to invert the present deep structure of tools; if we give people tools that guarantee their right to work with high, independent efficiency, thus simultaneously eliminating the need for either slaves or masters and enhancing each person’s range of freedom. People need new tools to work with rather than tools that “work” for them. They need technology to make the most of the energy and imagination each has, rather than more well-programmed energy slaves.

[…]

As the power of machines increases, the role of persons more and more decreases to that of mere consumers.

[…]

This world-wide crisis of world-wide institutions can lead to a new consciousness about the nature of tools and to majority action for their control. If tools are not controlled politically, they will be managed in a belated technocratic response to disaster. Freedom and dignity will continue to dissolve into an unprecedented enslavement of man to his tools.

Illich is among the older thinkers whose work on technology and society I think remains instructive and stimulating. These considerations seem especially relevant to our debates about automation and employment.

Illich was wide-ranging in his interests. Early in the life of this blog, I frequently cited In the Vineyard of the Text, his study of the evolution of writing technologies in the late medieval period.

The Rhetorical “We” and the Ethics of Technology

“Questioning AI ethics does not make you a gloomy Luddite,” or so the title of a recent article in a London business newspaper assures us. The most important thing to be learned here is that someone feels this needs to be said. Beyond that, there is also something instructive about the concluding paragraphs. If we read them against the grain, these paragraphs teach us something about how difficult it is to bring ethics to bear on technology.

“I’m simply calling for us to use all the tools at our disposal to build a better digital future,” the author tells us.

In practice, this means never forgetting what makes us human. It means raising awareness and entering into dialogue about the issue of ethics in AI. It means using our imaginations to articulate visions for a future that’s appealing to us.

If we can decide on the type of society we’d like to create, and the type of existence we’d like to have, we can begin to forge a path there.

All in all, it’s essential that we become knowledgeable, active, and influential on AI in every small way we can. This starts with getting to grips with the subject matter and past extreme and sensationalised points of view. The decisions we collectively make today will influence many generations to come.

Here are the challenges:

We have no idea what makes us human. You may, but we don’t.

We have nowhere to conduct meaningful dialogue; we don’t even know how to have meaningful dialogue.

Our imaginations were long ago surrendered to technique.

We can’t decide on the type of society we’d like to create or the type of existence we’d like to have, chiefly because this “we” is rhetorical. It is abstract and amorphous.

There is no meaningful antecedent to the pronouns we and us used throughout these closing paragraphs. Ethics is a communal business. This is no less true with regard to technology; perhaps it is all the more true. There is, however, no we there.

As individuals, we are often powerless against larger forces dictating how we are to relate to technology. The state is in many respects beholden to the technological–ideologically, politically, economically. Regrettably, we have very few communities located between the individual and the state constituting a we that can meaningfully deliberate and effectively direct the use of technology.

Technologies like AI emerge and evolve in social spaces that are resistant to substantial ethical critique. They also operate at a scale that undermines the possibility of ethical judgment and responsibility. Moreover, our society is ordered in such a way that there is very little to be done about it, chiefly because of the absence of structures that would sustain and empower ethical reflection and practice, the absence, in other words, of a we that is not merely rhetorical.