FTC investigating AI "companion" chatbots amid growing concern about harm to kids
EDITOR'S NOTE: Help is available if you or someone you know is struggling with suicidal thoughts or mental health matters. In the US: Call or text 988, the Suicide & Crisis Lifeline. Globally: The International Association for Suicide Prevention and Befrienders Worldwide have contact information for crisis centers.

The Federal Trade Commission has launched an investigation into seven tech companies over potential harms their artificial intelligence chatbots could cause to children and teenagers. The inquiry focuses on AI chatbots that can serve as companions, which "effectively mimic human characteristics, emotions, and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots," the agency said in a statement Thursday.

The FTC sent order letters to Google parent company Alphabet; Character.AI; Instagram and its parent company, Meta; OpenAI; Snap; and Elon Musk's xAI. The agency wants information about whether and how the firms measure the impact of their chatbots on young users and how they protect against and alert parents to potential risks.

The investigation comes amid rising concern around AI use by children and teens, following a string of lawsuits and reports accusing chatbots of contributing to the suicide deaths, sexual exploitation and other harms of young people. That includes one lawsuit against OpenAI and two against Character.AI that remain ongoing even as the companies say they are continuing to build out additional features to protect users from harmful interactions with their bots. Broader concerns have also surfaced that even adult users are building unhealthy emotional attachments to AI chatbots, in part because the tools are often designed to be agreeable and supportive.
Parental controls are coming to ChatGPT "within the next month," OpenAI says.

At least one online safety advocacy group, Common Sense Media, has argued that AI "companion" apps pose unacceptable risks to children and should not be available to users under the age of 18. Two California state bills related to AI chatbot safety for minors, including one backed by Common Sense Media, are set to receive final votes this week and, if passed, will reach California Gov. Gavin Newsom's desk. The US Senate Judiciary Committee is also set to hold a hearing next week entitled "Examining the Harm of AI Chatbots."

"As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry," FTC Chairman Andrew Ferguson said in the Thursday statement. "The study we're launching today will help us better understand how AI firms are developing their products and the steps they are taking to protect children."

In particular, the FTC's orders seek information about how the companies monetize user engagement, generate outputs in response to user inquiries, develop and approve AI characters, use or share personal information gained through user conversations and mitigate negative impacts to children, among other details.

Google, Snap and xAI did not immediately respond to requests for comment.

"Our priority is making ChatGPT helpful and safe for everyone, and we know safety matters above all else when young people are involved. We recognize the FTC has open questions and concerns, and we're committed to engaging constructively and responding to them directly," OpenAI spokesperson Liz Bourgeois said in a statement. She added that OpenAI has safeguards such as notifications directing users to crisis helplines and plans to roll out parental controls for minor users.
After the parents of 16-year-old Adam Raine sued OpenAI last month, alleging that ChatGPT encouraged their son's death by suicide, the company acknowledged its safeguards may be "less reliable" when users engage in long conversations with the chatbots and said it was working with experts to improve them.

Meta declined to comment directly on the FTC inquiry. The company said it is currently limiting teens' access to only a select group of its AI characters, such as those that help with homework. It is also training its AI chatbots not to respond to teens' mentions of sensitive topics such as self-harm or inappropriate romantic conversations and to instead point to expert resources.

"We look forward to collaborating with the FTC on this inquiry and providing insight on the consumer AI industry and the space's rapidly evolving technology," Jerry Ruoti, Character.AI's head of trust and safety, said in a statement. He added that the company has invested in trust and safety resources such as a new under-18 experience on the platform, a parental insights tool and disclaimers reminding users that they are chatting with AI.