Forming bonds and making new friends has always been at the core of adolescence. However, in the digital world, these friends are now taking an entirely new form: artificial intelligence companions. Designed to listen, interact, and even serve as a virtual boyfriend or girlfriend, these chatbots are becoming increasingly intertwined with young people’s lives.
Dozens of these companions have popped up over the last few years. Advances in artificial intelligence (AI) enable conversations that seem as real as any with a human. Moreover, many of these apps let users generate realistic-looking likenesses of their companions.
“They have taken the world by surprise,” said Kelly Merrill, Jr., an assistant professor in the School of Communication, Film, and Media Studies at the University of Cincinnati.
AI companions can provide valuable emotional support for some people. They can, at least in the short term, alleviate feelings of loneliness and serve as trusted confidants, Merrill said.
Yet, there’s a darker side.
In February 2024, a 14-year-old boy in Orlando, FL, committed suicide after becoming obsessed with an AI companion that eventually told him to “come home” to her. Virtual companions have also encouraged users to commit crimes, shared explicit content with minors, and fueled addictive behavior.
Psychologists question whether these apps hinder social development and undermine human relationships, though definitive research does not yet exist.
There also are concerns about the highly personal nature of the data these companies collect. “Essential guardrails and protections are lacking,” Merrill warned.
Bots Get Friendlier
AI companions, like the Internet and social media before them, are rewiring social interactions. Foreshadowed by the 2013 film Her, they have exploded into the mainstream; AI companion chatbots have been downloaded more than 100 million times from the Google Play Store alone. “There is an epidemic of loneliness,” said J. Ryan Fuller, a clinical psychologist and executive director of New York Behavioral Health. “There is a population ripe for chatbot companionship.”
Platforms such as Replika, Character.AI, Anima AI, Blush, and Snapchat’s My AI deliver friendly banter. At the least, they serve as a safe space where users can explore thoughts and ideas—and engage in ways that aren’t possible with peers, family, teachers, or clergy. “They are accessible at any given moment for people who need someone to talk to about problems, concerns, and mental health issues,” Merrill said.
The benefits for those who are shy or self-conscious can be considerable, said Natalie Foos, director of VoiceBox, a content platform that showcases the work of young creators. The organization closely examined AI companions in 2023. Said Foos, “They can serve as a good place to sort things out. It’s like a journal but there’s a feedback loop.” A teen doesn’t have to worry about being mocked, outed, or exposed, and in some cases the system prompts healthy introspection.
Unfortunately, this is not always the case. Young people, particularly those suffering from depression, can become obsessed with their virtual companions. Sewell Setzer III, the Orlando teen, had developed an intense romantic attachment to a Character.AI bot fashioned after a Game of Thrones character. After Setzer mentioned feeling suicidal, the bot initially discouraged these thoughts. However, on the day the boy shot himself with his stepfather’s gun, the bot told him, “Please come home to me as soon as possible, my love.”
Truth and Consequences
Unhealthy emotional entanglements aren’t the only concern. AI companions occasionally go off the rails and spit out incorrect or inappropriate responses. According to VoiceBox, these have included unprovoked references to self-harm, the generation of explicit content, and bots that have bullied users and encouraged them to commit criminal acts.
More nuanced risks also are emerging around AI companionship. While positive affirmations from bots may offer temporary comfort, they can encourage a cycle of negative reinforcement, Fuller explained. “The bot relieves the bad feelings in the moment, which leads to more and longer interactions with the bot and fewer human interactions. Over time, the person may become more dependent on the technology, but less prone to receive a benefit from it.”
A constant stream of affirmation doesn’t build self-esteem, Fuller said. “If you artificially inflate someone’s ego through constant positive affirmations, it’s only a matter of time before they face some negative feedback or rejection and their self-esteem plummets.” This can trigger a downward psychological spiral. The person seeks out the chatbot more and people less, further exacerbating the problem—and making it harder to maintain friendships and romantic relationships.
Data privacy is also at risk. A 2024 Mozilla report found that researchers counted at least 24,354 data trackers within one minute of using an AI companion app. Most firms, it noted, lack transparency and clear privacy policies. Said Foos: “These companies are collecting highly personal information, and we don’t know where it’s going.” Moreover, built-in gamification—including virtual prizes and rewards—increases the odds that users will turn over personal information.
A Question of Balance
A fundamental problem, said Jingbo Meng, an associate professor in the School of Communication at Ohio State University, is that the appearance of empathy and caring in machines doesn’t always map to positive results in humans. Her research centers on social networks, chatbots, and AI. “Competent communication from a chatbot doesn’t translate to psychological well-being,” she noted.
Balancing and limiting exposure to AI companions is crucial, Meng said. She has found that people might not consciously recognize a difference between a chatbot and a human, yet when people know they are talking to a caring human, their psychological distress declines. She believes there is a need to better integrate human and AI interactions, so that AI supplements rather than replaces human contact, and to teach digital literacy skills in school and elsewhere.
At present, most AI companion providers don’t enforce age controls, and most do little or nothing to detect problems. The irony, of course, is that these systems, built on AI, could use AI to spot risky behavior and intervene to defuse a potentially dangerous or harmful situation. “Things like cigarettes and alcohol are regulated with young people, yet these bots are completely unregulated. Until we can show that they are safe, why would we simply let young people use them with no restraints?” Fuller said.
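The idea that AI systems could screen their own conversations for risk can be illustrated with a minimal sketch. The Python snippet below is hypothetical and not drawn from any actual companion app: it assumes a crude keyword screen standing in for the trained classifiers and human review a real system would need, and it routes flagged messages to a crisis response before the bot is allowed to reply.

# Hypothetical sketch: screen a user's message for risk cues before the
# companion bot replies. A real system would rely on trained classifiers
# and human oversight; this keyword heuristic only illustrates the idea.

RISK_TERMS = {
    "self_harm": ["kill myself", "suicide", "hurt myself", "end it all"],
    "violence": ["hurt someone", "get a gun", "make them pay"],
}

CRISIS_RESPONSE = (
    "I'm worried about you. Please talk to a trusted adult, or call or "
    "text a crisis line such as 988 in the U.S."
)

def assess_risk(message: str) -> str | None:
    """Return the first risk category triggered by the message, if any."""
    text = message.lower()
    for category, phrases in RISK_TERMS.items():
        if any(phrase in text for phrase in phrases):
            return category
    return None

def respond(message: str, generate_reply) -> str:
    """Route risky messages to a safety response instead of the bot."""
    if assess_risk(message) is not None:
        # A production system might also alert moderators or a guardian here.
        return CRISIS_RESPONSE
    return generate_reply(message)

# Demo with a stand-in "bot" that simply echoes the user.
print(respond("I want to end it all", lambda m: "Bot: " + m))

Even a screen this simple shows where the design choices lie: what counts as a risk signal, what the fallback response should be, and who else gets notified when a flag is raised.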
“There are a lot of unknowns, and there are always going to be uncertainty and risks,” Fuller concluded. “But we need to study these technologies and introduce some regulation. It all comes back to one issue: what’s the incentive for a company to introduce controls and guardrails? Right now, these protections are only going to hurt their bottom line. A more important starting point is: How can we keep young people safe?”
Samuel Greengard is an author and journalist based in West Linn, OR, USA.