Growing Up with AI

In an excerpt from her forthcoming book Who Can You Trust? How Technology Brought Us Together and Why It Might Drive Us Apart, Rachel Botsman reflects on her three-year-old’s interactions with Amazon’s AI assistant, Alexa.

Botsman found that her daughter took quite readily to Alexa and was soon asking her all manner of questions and even asking Alexa to make choices for her, about what to wear, for instance, or what she should do that day. “Grace’s easy embrace of Alexa was slightly amusing but also alarming,” Botsman admits. “Today,” she adds, “we’re no longer trusting machines just to do something, but to decide what to do and when to do it.” She then goes on to observe that the next generation will grow up surrounded by AI agents, so that the question will not be “Should we trust robots?” but rather “Do we trust them too much?”

Along with issues of privacy and data gathering, Botsman was especially concerned with the intersection of AI technology and commercial interests: “Alexa, after all, is not ‘Alexa.’ She’s a corporate algorithm in a black box.”

To these concerns, philosopher Mark White, elaborating on Botsman’s reflections, adds the following:

Again, this would not be as much of a problem if the choices we cede to algorithms only dealt with songs and TV shows. But as Botsman’s story shows, the next generation may develop a degree of faith in the “wisdom” of technology that leads them to give up even more autonomy to machines, resulting in a decline in individual identity and authenticity as more and more decisions are left to other parties to make in interests that are not the person’s own—but may be very much in the interests of those programming and controlling the algorithms.

These concerns are worth taking into consideration. I’m ambivalent about framing a critique of technology in terms of authenticity, or even individual identity, but I’m not opposed to a conversation along these lines. Such a conversation at least encourages us to think a bit more deeply about the role of technology in shaping the sorts of people we are always in the process of becoming. This is, of course, especially true of children.

Our identity, however, does not emerge in pristine isolation from other human beings or independently of the fabric of our material culture, technologies included. That is not the ideal at which we should aim. Technology will unavoidably be part of our children’s lives and ours. But which technologies? Under what circumstances? For what purposes? With what consequences? These are some of the questions we should be asking.

Of an AI assistant that becomes part of a child’s taken-for-granted environment, other more specific questions also come to mind.

What conversations or interactions will the AI assistant displace?

How will it affect the development of a child’s imagination?

How will it direct a child’s attention?

How will a child’s language acquisition be affected?

What expectations will it create regarding the solicitude a child can expect from the world?

How will their curiosity be shaped by what the AI assistant can and cannot answer?

Will an AI assistant undermine the development of critical cognitive skills by its ability to respond immediately to simple questions?

Will their communication and imaginative life shrink to the narrow parameters within which they can interact with AI?

Will parents be tempted to offload their care and attentiveness to the AI assistant, and with what consequences?

Of AI assistants generally, we might conclude that what they do well (answer simple, direct questions, for example) may, in fact, prove harmful to a child’s development, and what they do poorly (provide rich, complex engagement with the world) is what children need most.

We tend to bend ourselves to fit the shape of our tools. Even as tech-savvy adults we do this. It seems just as likely that children will do likewise. For this reason, we do well to think long and hard about the devices that we bring to bear upon their lives.

We make all sorts of judgments as a society about when it is appropriate for children to experience certain realities, and this care for children is one of the marks of a healthy society. We do this through laws, policy, and cultural norms. With regard to the norms that govern the technology we introduce into our children’s lifeworld, we would do well, it seems to me, to adopt a more precautionary stance. Sometimes this means shielding children from certain technologies when it is not altogether obvious that their impact will be beneficial. We should, in other words, shift the burden of proof so that a technology must earn its place in our children’s lives.

Botsman finally concluded that her child was not ready for Alexa to be a part of her life and that Alexa was possibly usurping her own role as a parent:

Our kids are going to need to know where and when it is appropriate to put their trust in computer code alone. I watched Grace hand over her trust to Alexa quickly. There are few checks and balances to deter children from doing just that, not to mention very few tools to help them make informed decisions about A.I. advice. And isn’t helping Gracie learn how to make decisions about what to wear — and many more even important things in life — my job? I decided to retire Alexa to the closet.

It is even better when companies recognize some of these problems and decide (from mixed motives, I’m sure) to pull a device whose place in a child’s life is at best ambiguous.


This post is part of a series on being a parent in the digital age.

Finding a Place for Thought

Yesterday, I wrote briefly about how difficult it can be to find a place for thought when our attention, in both its mental and emotional dimensions, is set aimlessly adrift on the currents of digital media. Digital media, in fact, amounts to an environment that is inhospitable and, indeed, overtly hostile to thought.

Many within the tech industry are coming to a belated sense of responsibility for this world they helped fashion. A recent article in the Guardian tells their story. They include Justin Rosenstein, who helped design the “Like” button for Facebook but now realizes that it is common “for humans to develop things with the best of intentions and for them to have unintended, negative consequences” and James Williams, who worked on analytics for Google but who experienced an epiphany “when he noticed he was surrounded by technology that was inhibiting him from concentrating on the things he wanted to focus on.”

Better late than never, one might say; or perhaps it is too late. As per usual, there is a bit of ancient wisdom that speaks to the situation; in this case, the story of Pandora’s Box comes to mind. Nonetheless, when so many in the industry seem bent on evading responsibility for the consequences of their work, it is mildly refreshing to read about some who are at least willing to own those consequences and even strive to somehow make amends.

It is telling, though, that, as the article observes, “These refuseniks are rarely founders or chief executives, who have little incentive to deviate from the mantra that their companies are making the world a better place. Instead, they tend to have worked a rung or two down the corporate ladder: designers, engineers and product managers who, like Rosenstein, several years ago put in place the building blocks of a digital world from which they are now trying to disentangle themselves.”

Tristan Harris, formerly at Google, has been especially pointed in his criticism of the tech industry’s penchant for addictive design. Perhaps the most instructive part of Harris’s story is how he experienced a promotion to an ethics position within Google as, in effect, a marginalization and silencing.

(It is also edifying to consider the steady drumbeat of stories about how tech executives stringently monitor and limit their own children’s access to devices and the Internet, and why they send their children to expensive low-tech schools.)

Informed as my own thinking has been by the work of Hannah Arendt, I see this hostility to thought as a serious threat to our society. Arendt believed that thinking was somehow intimately related to our moral judgment and that an inability to think was a gateway to grave evils. Of course, it was a particular kind of thinking that Arendt had in mind: thinking, one might say, for thinking’s sake, thinking devoid of instrumentality.

Writing in Aeon recently, Jennifer Stitt drew on Arendt to argue for the importance of solitude for thought and thought for conscience and conscience for politics. As Stitt notes, Arendt believed that “living together with others begins with living together with oneself.” Here is Stitt’s concluding paragraph:

But, Arendt reminds us, if we lose our capacity for solitude, our ability to be alone with ourselves, then we lose our very ability to think. We risk getting caught up in the crowd. We risk being ‘swept away’, as she put it, ‘by what everybody else does and believes in’ – no longer able, in the cage of thoughtless conformity, to distinguish ‘right from wrong, beautiful from ugly’. Solitude is not only a state of mind essential to the development of an individual’s consciousness – and conscience – but also a practice that prepares one for participation in social and political life.

Solitude, then, is at least one practice that can help create a place for thought.

Paradoxically, in a connected world it is challenging to find either solitude or companionship. If we submit to a regime of constant connectivity, we end up with hybrid versions of both, versions which fail to yield their full satisfactions.

Additionally, as someone who works one and a half jobs and is also raising a toddler and an infant, I understand how hard it can be to find anything approaching solitude. In a real sense it is a luxury, but it is a necessary luxury, and if the world won’t offer it freely, then we must fight for it as best we can.

There was one thing left in Pandora’s Box after all the evils had flown irreversibly into the world: it was hope.

No Place for Thought

A few days ago Nathan Jurgenson tweeted a short thread commenting on what Twitter has become. “[I]t feels weird to tweet about things that arent news lately,” Jurgenson noted. A sociologist with an interest in social media and identity, he found that it felt rude to tweet about his interests when his feed seemed only concerned with the latest political news. About this tendency Jurgenson wisely observed, “following the news all day is the opposite of being more informed and it certainly isnt a kind ‘resistance.'”

These observations resonated with me. I’ve had a similar experience when logging in to Twitter, only to find that Twitter is fixated on the political (pseudo-)event of the moment. In those moments it seems, if not rude, then at least quixotic to link to a post that is not at all related to what everyone happens to be talking about. Sometimes, of course, it is not only political news that has this effect; it is also the all too frequent tragedy that consumes Twitter’s collective attention, or the frivolous faux-controversy, etc.

In moments like these Twitter — and the same is true to some degree of other platforms — demands something of us, but it is not thought. It demands a reaction, one that is swift, emotionally charged, and in keeping with the affective tenor of the platform. In many respects, this entails not only an absence of thought but conditions that are overtly hostile to thought.

Even apart from crises, controversies, and tragedies, however, the effect is consistent: the focus is inexorably on the fleeting present. The past has no hold; the future does not come into play. Our time is now, our place is everywhere. Of course, social media has only heightened a tendency critics have noted since at least Kierkegaard’s time: being well-informed, in the sense of keeping up with current events, undermines the possibility of serious thinking, mature emotional responses, sound judgment, and wise action.

It is important, in my view, to make clear that this is not merely a problem of information overload. If it were only information we were dealing with, then we might be better able to recognize the nature of the problem and act to correct it. It is also, as I’ve noted briefly before, an affect overload problem. It is the emotional register that accounts for the Pavlovian alacrity with which we attend to our devices and the digital flows for which they are a portal. These devices, then, are, in effect, Skinner boxes we willingly inhabit, boxes that condition our cognitive and emotional lives. Twitter says “feel this,” we say “how intensely?” Social media never invites us to step away, to think and reflect, to remain silent, to refuse a response for now or perhaps indefinitely.

Under these circumstances, there is no place for thought.

For the sake of the world, thought must, at least for a time, take leave of the world, especially the world mediated to us by social media. We must, in other words, by deliberate action, make a place for thought.

Machines for the Evasion of Moral Responsibility

The title of a recent article by Virginia Heffernan in Wired asked, “Who Will Take Responsibility for Facebook?”

The answer, of course, is that no one will. Our technological systems, by nature of their design and the ideology that sustains them, are machines for the evasion of moral responsibility.

Heffernan focused on Facebook’s role in spreading misinformation during the last election, which has recently come to fuller and more damning light. Not long afterwards, in a post titled “Google and Facebook Failed Us,” Alexis Madrigal explored how misinformation about the Las Vegas shooting spread on both Google and Facebook. Castigating both companies for their failure to take responsibility for the results their algorithms generated, Madrigal concluded: “There’s no hiding behind algorithms anymore. The problems cannot be minimized. The machines have shown they are not up to the task of dealing with rare, breaking news events, and it is unlikely that they will be in the near future. More humans must be added to the decision-making process, and the sooner the better.”

Writing on the same topic, William Turton noted of Google that “the company’s statement cast responsibility on an algorithm as if it were an autonomous force.” “It’s not about the algorithm,” he adds. “It’s not about what the algorithm was supposed to do, except that it went off and did a bad thing instead. Google’s business lives and dies by these things we call algorithms; getting this stuff right is its one job.”

Siva Vaidhyanathan, a scholar at UVA whose book on Facebook, Anti-Social Media, is to be released next year, described his impression of Zuckerberg to Heffernan in this way: “He lacks an appreciation for nuance, complexity, contingency, or even difficulty. He lacks a historical sense of the horrible things that humans are capable of doing to each other and the planet.”

This leads Heffernan to conclude the following: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability [….] It’s hard to imagine he will submit to truth and reconciliation, or use Facebook’s humiliation as a chance to reconsider its place in the world. Instead, he will likely keep lawyering up and gun it on denial and optics, as he has during past litigation and conflict.”

This is an arresting observation: “Zuckerberg may just lack the moral framework to recognize the scope of his failures and his culpability.” Frankly, I suspect Zuckerberg is not the only one among our technologists who fits this description.

It immediately reminded me of Hannah Arendt’s efforts to understand the unique evils of mid-twentieth-century totalitarianism, specifically the evil of the Holocaust. Thoughtlessness, or, better, an inability to think, was, Arendt believed, near the root of this new kind of evil. Arendt insisted that “absence of thought is not stupidity; it can be found in highly intelligent people, and a wicked heart is not its cause; it is probably the other way round, that wickedness may be caused by absence of thought.”

I should immediately make clear that I do not mean to equate Facebook’s and Google’s very serious failures with the Holocaust. This is not at all my point. Rather, it is that, following Arendt’s analysis, we can see more clearly how a certain inability to think (not merely calculate or problem-solve), and consequently to assume moral responsibility for one’s actions, takes hold and yields a troubling and pernicious species of ethical and moral failures.

It is one thing to expose and judge individuals whose actions are consciously intended to cause harm and work against the public good. It is another thing altogether to encounter individuals who, while clearly committing such acts, are, in fact, themselves oblivious to the true nature of their actions. They are so enclosed within an ideological shell that they seem unable to see what they are doing, much less assume responsibility for it.

It would seem that whatever else we may say about algorithms as technical entities, they also function as the symbolic base of an ideology that abets thoughtlessness and facilitates the evasion of responsibility. As such, however, they are just a new iteration of the moral myopia that many of our best tech critics have been warning us about for a very long time (see here and here).


Two years ago, I published a post, Resisting the Habits of the Algorithmic Mind, in which I explored this topic of thoughtlessness and moral responsibility. I think it remains a useful way of making sense of the peculiar contours of our contemporary moral topography.



Idols of Silicon and Data

In 2015, former Google and Uber engineer Anthony Levandowski founded a nonprofit called Way of the Future in order to develop an AI god and promote its worship. The mission statement reads as follows: “To develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society.”

A few loosely interconnected observations follow.

First, I would suggest that Levandowski’s mission only makes explicit what is often our implicit relationship to technology. Technology is a god to us, albeit a “god that limps,” in Colin Norman’s arresting image drawn from Greek mythology’s lame, metal-working god, Hephaestus. We trust ourselves to it, assign to it salvific powers, uncritically heed its directives, and hang our hopes on it. But in its role as functional deity it inevitably disappoints.

Second, it seems to me that the project should be taken seriously only as an explicit signifier of implicit and tacit realities. However, as Levandowski’s project makes clear, quasi-religious techno-fantasies are often embraced by well-placed and influential technologists. To some degree, then, it would seem that the development of technology, its funding and direction, is driven by these motives and they should not be altogether ignored.

Third, the article discussing Levandowski’s project also leans on some comments from historian Yuval Harari, who has developed something of a reputation for grand claims about the future of humanity. According to the article, “history tells us that new technologies and scientific discoveries have continually shaped religion, killing old gods and giving birth to new ones.” The author then quotes Harari:

That is why agricultural deities were different from hunter-gatherer spirits, why factory hands and peasants fantasised about different paradises, and why the revolutionary technologies of the 21st century are far more likely to spawn unprecedented religious movements than to revive medieval creeds.

Both miss the reciprocal relationship between society and technology, and specifically between religion in the West and technology. It is not only a matter of technology impacting and affecting religion; it is also a matter of religion infusing and informing the development of technology. I’ve cited it numerous times before, but it’s worth mentioning again that David Noble’s The Religion of Technology is a wonderful place to start in order to understand this dynamic. According to Noble, “literally and historically,” modern technology and religion have co-evolved and, consequently, “the technological enterprise has been and remains suffused with religious belief.” We fail to understand technology in the West if we do not understand this socio-religious dimension.

Fourth, the author goes on to cite advocates of transhumanism. Transhumanism is usefully construed as a Christian heresy, or, if you prefer, (post-)Christian fan-fiction. I elaborate on that claim toward the end of this post.

Fifth, some of our most incisive tech critics have been people of religious conviction. Jacques Ellul, Marshall McLuhan, Ivan Illich, Walter Ong, Albert Borgmann, Wendell Berry, and Bill McKibben come to mind. Neil Postman, who was not a religious person as far as I know, nonetheless attributed his critical interest in media to a reading of the second commandment. The tech critic of religious faith has at least one thing going for them: their conviction that there is one God and technology is not it.* Valuing technology for more than it is, then, appears as a species of idolatry.

Sixth, all god-talk in relation to technology takes for granted some understanding of God, religion, faith, etc. It is often worth excavating the nature of these assumptions because they are usually doing important conceptual work in whatever argument or claim they are embedded.

Seventh, in the article the transhumanist philosopher Zoltan Istvan claims of the potential AI deity that “this God will actually exist and hopefully will do things for us.” The religion of technology is ultimately about power: human power over nature and, finally, the power of some humans over others.

Pieter Brueghel, Construction of the Tower of Babel (1563)

___________________________

*Clearly, I principally have in view the monotheistic religious traditions.