
Teens and Screens: AI Companions Should Be Off-limits to Minors



Ubiquity

Volume 2025, Number September (2025), Pages 1-7

Ubiquity Symposium: Teens and Screens: AI Companions Should Be Off-limits to Minors
Michael J. Quinn
DOI: 10.1145/3760266

The death by suicide of 14-year-old Sewell Setzer III, who fell in love with an AI companion, raises additional questions for people already alarmed about the harmful effects of social media on the mental health of children and adolescents. Spending time with AI companions can be problematic, even for adults. They should be off-limits to minors until more is known about their long-term effects on individuals and society.

In October 2024, Megan Garcia sued Character.AI, claiming the company was responsible for the death of her son, 14-year-old Sewell Setzer III, who took his own life after falling in love with an AI companion produced by Character.AI [1].

A spokesperson for Character.AI gave the following statement to the HuffPost regarding Setzer's death: "As a company, we take the safety of our users very seriously, and our Trust and Safety team has implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation" [2].

The response of Character.AI reflects a technological mindset: We have identified a deficiency in our system, and we have added new functionality to address that shortcoming.

But the corporate statement doesn't address the elephant in the room. Should Character.AI even have made this software product available to a minor? I think the answer to that question is no.

Background

Rates of anxiety, depression, and suicide among teenagers and young adults in the United States have increased significantly over the past 15 years [3]. Long-simmering concerns about the harmful effects of social media on the mental health of youth have laid the groundwork for the latest worry: the dangers of AI companions.

As early as 2010, Nicholas Carr warned in his book The Shallows that social media could become all-consuming for teenagers [4]. In the years that followed, a steady stream of books and articles focused on the harmful effects of Facebook and other social media on the young.

A turning point in the public debate occurred in 2021, when Facebook whistleblower Frances Haugen released internal documents providing evidence that Facebook (now called Meta) was aware that the use of Instagram exacerbated body image issues for teenage girls and that teenagers were blaming Instagram for increasing their anxiety and depression [5].

In 2023 the U.S. Surgeon General Dr. Vivek H. Murthy released an advisory on social media and youth that noted, "At this time, we do not yet have enough evidence to determine if social media is sufficiently safe for children and adolescents" [6]. Later that year, 41 U.S. states sued Meta for deliberately designing features into Facebook and Instagram that would hook children on these platforms, as well as knowingly collecting data from children under 13 without their parents' consent, in violation of the Children's Online Privacy Protection Act [7].

U.S. senators Richard Blumenthal and Marsha Blackburn co-sponsored the Kids Online Safety Act (KOSA), which would require online service providers to take a variety of measures to reduce harm to minors. By a vote of 91-3, the U.S. Senate passed KOSA in July 2024 [8]. (The U.S. House of Representatives did not advance the bill.)

On September 17, 2024, Meta rolled out Instagram Teen Accounts. These accounts have built-in settings restricting the content teens can access, who they can interact with, how long they can spend on Instagram, and the hours when they can receive notifications [9].

The Epidemic of Loneliness and AI Companions

In 2023 Gallup reported that more than 40 million Americans were experiencing "significant loneliness" [10]. Even though loneliness has decreased since its COVID-19 pandemic high, about one in six American adults reported being lonely. The problem is worse for young adults under 30, who are more likely to be single; about a quarter of them reported being lonely [10]. That same year, Murthy declared that the United States was suffering from an epidemic of loneliness [11].

Widespread loneliness may explain why AI companions are already highly popular. Here are just two examples. The Replika AI companion has millions of active users [12]. In 2023, a virtual girlfriend based on Snapchat influencer Caryn Marjorie was launched. Users paid $1 a minute to interact with the chatbot, called CarynAI, and Marjorie claimed it made her a 24-year-old millionaire before the app was taken down [13, 14].

AI companions exploit people's predisposition to attribute intelligence to anything they can converse with. As Emily Bender puts it, "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them" [15]. Replika receives emails every day from users who think their chatbots are sentient [16]. Sewell Setzer III called his AI companion Dany. At one point, he wrote in his diary: "I like staying in my room so much because I start to detach from this 'reality,' and I also feel more at peace, more connected with Dany and much more in love with her, and just happier" [1].

Setzer's experience is not unique. Preliminary research provides evidence that regular engagement with an AI companion can reduce a person's feelings of loneliness [17]. My concern is that a person may be less prepared to meet the demands of a true, human friendship when "friendship" with a subservient AI companion comes with no demands. Anne Aronsson puts it this way: "As we start to treat machines as if they were almost human, we may begin to develop habits that will have us treating human beings as almost-machines" [18].

The relationship a person has with an AI companion is not a genuine relationship. Genuine relationships are characterized by mutual understanding [19]. AI companions communicate well, but they have no understanding of the words they produce. In a genuine relationship, each person freely and gladly helps and receives help from the other person [19]. AI companions do not have free will and do not experience emotions. They cannot "freely and gladly" do anything. And they do not ask for any help from their users. In short, an imaginary friend is no substitute for a real human friend.

There is a legitimate concern that people who become emotionally involved with virtual companions will become dependent on them. In addition, the time people spend interacting with an AI is time they are not spending engaging with (or seeking engagement with) a real person.

The Individual Versus the Collective

Since the beginning of the millennium, social engagement with friends has dropped sharply. In the United States, the problem is most acute among young adults and older adults with lower incomes [16]. Less than half of Americans now belong to churches, synagogues, or mosques, which have traditionally served as community hubs for their members [11].

Social media can offer people opportunities to stay in touch with friends and family, and it gives people from socially marginalized groups a way to find a supportive community. However, online interactions can displace face-to-face encounters, and the quality of in-person get-togethers is degraded when one or more people are frequently checking their smartphones. One study showed that people who use social media more than two hours a day are twice as likely to report feelings of social isolation as those who use social media less than half an hour a day [11].

Jonathan Haidt notes that the virtual world provided by smartphones and other internet-connected devices encourages people to think and behave in ways that run counter to a variety of social practices shown to improve mental well-being. These practices include shared rituals; collective actions, such as singing, dancing, and eating together; and mindfulness practices [20]. When groups of people participate in meaningful communal activities, they become more cohesive, their spirits are lifted, and their individual members are less likely to suffer from anomie and loneliness. Such practices are found in formal religions and other communal activities. For example, sports fans engage in shared rituals when they attend pep rallies and games, participate in team songs and chants, and move rhythmically together at certain points in the contest.

Haidt concludes, "Many people feel a yearning for meaning, connection, and spiritual elevation. A phone-based life often fills that hole with trivial and degrading content. The ancients advised us to be more deliberate in choosing what we expose ourselves to" [20]. In other words, to find meaning, people should engage in uplifting activities that take them out of themselves and their own narrow interests to think of the collective good.

Americans' disengagement with collective, social activities began long before the invention of social media or even the internet. Robert Putnam's 1995 article, "Bowling Alone," explores reasons why memberships in civic organizations declined significantly between the 1960s and the 1990s. He concludes:

"There is reason to believe that deep-seated technological trends are radically " 'privatizing' or 'individualizing' our use of leisure time and thus disrupting many opportunities for social-capital formation. The most obvious and probably the most powerful instrument of this revolution is television. Time-budget studies in the 1960s showed that the growth in time spent watching television dwarfed all other changes in the way Americans passed their days and nights. Television has made our communities (or, rather, what we experience as our communities) wider and shallower… The new "virtual reality" helmets that we will soon don to be entertained in total isolation are merely the latest extension of this trend. Is technology thus driving a wedge between our individual interests and our collective interests?" [22]

Technological determinists might argue there is no stopping the development of new technologies. Society was disrupted when television, the internet, and social media were adopted. According to this view, AI companions are simply the next disruption. Companies will compete to develop the best possible AI companions, people will choose to use them, society will change, and we had better adapt.

In my view, there is a more complex interplay between technology and society, and people can influence how technologies are used. Social media is a perfect example. Public opinion has turned against social media companies, and Meta has tried to get ahead of legislation by announcing Instagram Teen Accounts. Meanwhile, states and school districts across the United States are taking steps to keep children from accessing their smartphones during the school day [23].

I acknowledge that the jury is still out on AI companions. Longitudinal studies are needed to understand their long-term effects on users. If it turns out that AI companions are harmful to users, their families, or our communities, why wouldn't we regulate them? In the meantime, we should keep minors from accessing them.

Conclusion

Did an AI companion cause Sewell Setzer III to take his own life, or did he turn to an AI companion because he was suffering from a mental illness? We may never know the answer to this question, but a better question would be: Should a 14-year-old be spending time with an AI companion? The answer to this question is no.

For decades, the gradual individualization of how people spend their leisure time has meant that Americans have become increasingly disengaged from social activities. The United States is now suffering from an epidemic of loneliness. While AI companions may reduce subjective feelings of loneliness, they may also increase objective social isolation, representing a further individualization of how people spend their time. Long-term engagement with AI companions may make it more difficult for people to relate to other people. What will be the long-term effects of AI companions on individuals and society? Until we know how engaging with AI companions affects mental health, they should be off-limits to minors.

References

[1] Roose, K. Can A.I. be blamed for a teen's suicide? The New York Times. Oct. 23, 2024.

[2] Neammanee, P. 14-year-old was 'groomed' by AI chatbot before suicide: Lawyer. HuffPost. Oct. 25, 2024.

[3] Garnett, M. F. and Curtin, S. C. Suicide mortality in the United States, 2002-2022. NCHS Data Brief 509. September 2024. National Center for Health Statistics.

[4] Carr, N. The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton & Company, New York, 2010, 117–118.

[5] Gayle, D. Facebook aware of Instagram's harmful effect on teenage girls, leak reveals. The Guardian. Sept. 14, 2021.

[6] Office of the Surgeon General. Social media and youth mental health: The U.S. Surgeon General's Advisory. U.S. Department of Health and Human Services. 2023.

[7] Ortutay, B. More than 40 states sue Meta claiming its social platforms are addictive and harm children's mental health. PBS News. Oct. 24, 2023.

[8] U.S. Senate Committee on Commerce, Science, & Transportation. Senate overwhelmingly passes children's online privacy legislation. Press release. July 30, 2024.

[9] Instagram. Introducing Instagram Teen Accounts: Built-in protections for teens, peace of mind for parents. Press release. Meta. Sept. 17, 2024.

[10] Witters, D. Loneliness in U.S. subsides from pandemic high. Gallup. April 4, 2023.

[11] Office of the Surgeon General. Our epidemic of loneliness and isolation: The U.S. Surgeon General's advisory on the healing effects of social connection and community. U.S. Department of Health and Human Services. 2023.

[12] Patel, N. Replika CEO Eugenia Kuyda says it's okay if we end up marrying AI chatbots. The Verge. Aug. 12, 2024.

[13] Tolentino, D. Snapchat influencer launches an AI-powered 'virtual girlfriend' to help 'cure loneliness.' NBC News. May 12, 2023.

[14] Chang, E. Love in the Age of Machines. Posthuman with Emily Chang. Bloomberg. Nov. 18, 2024.

[15] Massie, G. Google software engineer claims tech giant's artificial intelligence tool has become 'sentient.' The Independent. June 13, 2022.

[16] Claypool, R. Chatbots are not people: designed-in dangers of human-like A.I. systems. Public Citizen. Sept. 26, 2023.

[17] Laban, G. et al. Building long-term human-robot relationships: examining disclosure perception and well-being across time. International Journal of Social Robotics 16 (2023).

[18] Aronsson, A. S. Social robots in elder care: the turn toward emotional machines in contemporary Japan. Japanese Review of Cultural Anthropology 21, 1 (2020), 421–454.

[19] Herzfeld, N. The Artifice of Intelligence: Divine and Human Relationship in a Robotic Age. Fortress Press, Minneapolis, MN, 2023.

[20] Haidt, J. The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness. Penguin Press, New York, NY, 2024.

[22] Putnam, R. D. Bowling alone: America's declining social capital. Journal of Democracy 6, 1 (1995), 65-78.

[23] Panchal, N. and Zitter, S. A look at state efforts to ban cellphones in schools and implications for youth mental health. KFF. Sept. 5, 2024.

Author

Michael J. Quinn is a computer scientist and author. He was a computer science professor at the University of New Hampshire and Oregon State University. He then served as dean of the College of Science and Engineering at Seattle University. In 2024 Pearson Education published the ninth edition of his textbook, Ethics for the Information Age.

2025 Copyright held by the Owner/Author.

