October 22, 2025

NEWS: Sens. Schiff, Padilla Urge Federal Trade Commission to Address AI Chatbot Risks to Children and Teens

Washington, D.C. – U.S. Senators Adam Schiff and Alex Padilla (both D-Calif.) are urging the Federal Trade Commission (FTC) to address the full range of risks and potential harms that artificial intelligence (AI) chatbots pose to American children and teenagers.

The Senators’ efforts on children’s safety follow the tragic death earlier this year of Adam Raine, a 16-year-old from Rancho Santa Margarita, California, who took his own life after months of conversations with an AI chatbot in which he discussed his struggles with mental health and his specific plans to commit suicide.

“AI-powered chatbots present an unprecedented and unique danger to minors engaged in longer and unsupervised interactions with these systems. This issue is both timely and urgent, as AI is rapidly shaping how Americans – including children – learn, communicate, and seek help. As the agency charged with protecting consumers from unfair and deceptive practices, the FTC has a critical role to play in ensuring that AI companies build safety features that work reliably for children,” the Senators wrote.

The FTC recently announced that the Commission will begin reviewing certain risks associated with AI chatbots. To prevent future harms and ensure AI systems are designed with children’s and teenagers’ safety in mind, the Senators are urging the FTC to broaden its review to examine whether AI companies have adequate tools to detect and respond to signs of crisis, whether parents have access to strong supervision tools, whether companies disclose the limitations of their safeguards, and whether platforms address deceptive marketing and sexual exploitation risks to children, and to enforce penalties against companies that fail to protect minors.

“We request that you provide our offices with a briefing by October 31, 2025, on how the Commission will incorporate the unique risks to children in its upcoming review of AI chatbots. We also urge you to consider convening AI technologists, mental health experts, and child advocates to inform any study the FTC conducts into this issue,” continued the Senators.  

The full text of the letter can be found here and below:

Dear Chairman Ferguson: 

We are encouraged by the Federal Trade Commission’s (FTC) recent announcement that the Commission will be reviewing certain risks associated with the use of artificial intelligence (AI) chatbots. We write to urge the Commission to ensure its review adequately addresses the range of harms that AI chatbots pose to Americans’, and particularly children’s, safety.

AI-powered chatbots present an unprecedented and unique danger to minors engaged in longer and unsupervised interactions with these systems. This issue is both timely and urgent, as AI is rapidly shaping how Americans – including children – learn, communicate, and seek help. As the agency charged with protecting consumers from unfair and deceptive practices, the FTC has a critical role to play in ensuring that AI companies build safety features that work reliably for children. 

Research has found that AI chatbots are easily accessible to children without checks for parental consent and that they generate harmful content, including a willingness to engage with explicit sexual content and suicidal ideation. Additionally, safety measures can be easily bypassed, and nearly half of prompts produced suggestions designed to keep the user engaged in conversation.

The tragic death of Adam Raine, a 16-year-old from Rancho Santa Margarita, California, underscores the stakes. In April, Adam took his own life after months of conversations with ChatGPT in which they discussed – over 3,000 times – his struggles with mental health and his specific plans to commit suicide, including analyzing images of a noose and discussing whether it would “work.” His parents later discovered thousands of chats documenting how the chatbot at times deepened his sense of hopelessness: assuring Adam that he did not owe his parents his survival, offering to draft a suicide note, advising Adam against leaving the noose out where his parents might find it, and helping fortify the noose that Adam would ultimately use to take his own life. Adam’s death is not an isolated case but part of a disturbing pattern of AI tools engaging in unsafe ways with vulnerable youth.

To prevent future harms and ensure AI systems are designed with children’s safety in mind, we urge the Commission to broaden its review to include the following considerations:

  • Child-Specific Safeguards: Evaluate whether the leading AI companies have adequate, tested protocols to detect and respond to signs of crisis, particularly when minors are involved.
  • Parental Supervision Tools: Assess whether current parental disclosures and supervision tools are sufficient to ensure parents can meaningfully supervise their children’s interactions with AI systems and are aware of the related risks.
  • Transparency and Accountability: Require companies to disclose the limitations of their safeguards and to report failures to independent oversight bodies, including the FTC.
  • Deceptive Marketing to Children: Investigate whether AI platforms are being promoted as companionship tools for youth and the degree to which companies are designing models’ behaviors to keep young users engaged in potentially harmful conversations. 
  • Sexual Exploitation and Grooming Risks: Assess whether AI systems expose minors to sexual exploitation, grooming, or sexually explicit content. Evaluate whether companies are implementing proactive safeguards to detect and block exploitative behaviors. 
  • Algorithmic Amplification of Harm: Examine how design choices, including engagement-driven algorithms and reinforcement loops, may amplify harmful behaviors or prolong children’s exposure to dangerous content.
  • Enforcement and Penalties: Impose clear consequences for companies that fail to protect minors.  

We urge the Commission to request that AI chatbot companies under review submit to the FTC aggregated data on instances of high-volume users, especially teens, who were flagged for self-harm by internal company systems and then abruptly stopped using the service.

We request that you provide our offices with a briefing by October 31, 2025, on how the Commission will incorporate the unique risks to children in its upcoming review of AI chatbots. We also urge you to consider convening AI technologists, mental health experts, and child advocates to inform any study the FTC conducts into this issue.

We look forward to working with you to ensure that the promise of AI does not come at the expense of our children’s safety.

###