Growing Up with AI

In an excerpt from her forthcoming book Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart, Rachel Botsman reflects on her three-year-old’s interactions with Amazon’s AI assistant, Alexa.

Botsman found that her daughter took quite readily to Alexa and was soon asking her all manner of questions, even asking Alexa to make choices for her: what to wear, for instance, or what she should do that day. “Grace’s easy embrace of Alexa was slightly amusing but also alarming,” Botsman admits. “Today,” she adds, “we’re no longer trusting machines just to do something, but to decide what to do and when to do it.” She goes on to observe that the next generation will grow up surrounded by AI agents, so that the question will not be “Should we trust robots?” but rather “Do we trust them too much?”

Along with issues of privacy and data gathering, Botsman was especially concerned with the intersection of AI technology and commercial interests: “Alexa, after all, is not ‘Alexa.’ She’s a corporate algorithm in a black box.”

To these concerns, philosopher Mark White, elaborating on Botsman’s reflections, adds the following:

Again, this would not be as much of a problem if the choices we cede to algorithms only dealt with songs and TV shows. But as Botsman’s story shows, the next generation may develop a degree of faith in the “wisdom” of technology that leads them to give up even more autonomy to machines, resulting in a decline in individual identity and authenticity as more and more decisions are left to other parties to make in interests that are not the person’s own—but may be very much in the interests of those programming and controlling the algorithms.

These concerns are worth taking into consideration. I’m ambivalent about framing a critique of technology in terms of authenticity, or even individual identity, but I’m not opposed to a conversation along these lines. Such a conversation at least encourages us to think a bit more deeply about the role of technology in shaping the sorts of people we are always in the process of becoming. This is, of course, especially true of children.

Our identity, however, does not emerge in pristine isolation from other human beings or independently from the fabric of our material culture, technologies included. That is not the ideal to which we should aim. Technology will unavoidably be part of our children’s lives and ours. But which technologies? Under what circumstances? For what purposes? With what consequences? These are some of the questions we should be asking.

Of an AI assistant that becomes part of a child’s taken-for-granted environment, other more specific questions also come to mind.

What conversations or interactions will the AI assistant displace?

How will it affect the development of a child’s imagination?

How will it direct a child’s attention?

How will a child’s language acquisition be affected?

What expectations will it create regarding the solicitude a child can expect from the world?

How will their curiosity be shaped by what the AI assistant can and cannot answer?

Will AI assistants undermine the development of critical cognitive skills by immediately answering simple questions?

Will a child’s communication and imaginative life shrink to the narrow parameters within which they can interact with AI?

Will parents be tempted to offload their care and attentiveness to the AI assistant, and with what consequences?

Of AI assistants generally, we might conclude that what they do well (answer simple, direct questions, for example) may, in fact, prove harmful to a child’s development, and what they do poorly (provide rich, complex engagement with the world) is what children need most.

We tend to bend ourselves to fit the shape of our tools. Even as tech-savvy adults we do this. It seems just as likely that children will do likewise. For this reason, we do well to think long and hard about the devices that we bring to bear upon their lives.

We make all sorts of judgments as a society about when it is appropriate for children to experience certain realities, and this care for children is one of the marks of a healthy society. We do this through laws, policy, and cultural norms. With regard to the norms that govern the technology we introduce into our children’s lifeworld, we would do well, it seems to me, to adopt a more precautionary stance. Sometimes this means shielding children from certain technologies when it is not altogether obvious that their impact will be beneficial. We should, in other words, shift the burden of proof so that a technology must earn its place in our children’s lives.

Botsman ultimately concluded that her child was not ready for Alexa to be a part of her life and that the device was possibly usurping her own role as parent:

Our kids are going to need to know where and when it is appropriate to put their trust in computer code alone. I watched Grace hand over her trust to Alexa quickly. There are few checks and balances to deter children from doing just that, not to mention very few tools to help them make informed decisions about A.I. advice. And isn’t helping Gracie learn how to make decisions about what to wear — and many more even important things in life — my job? I decided to retire Alexa to the closet.

It is even better when companies recognize some of these problems and decide (from mixed motives, I’m sure) to pull a device whose place in a child’s life is at best ambiguous.

This post is part of a series on being a parent in the digital age.
