Meta Updates AI Chatbot Training to Strengthen Teen Safety

Meta is updating AI chatbot training to block unsafe conversations with teens, focusing on self-harm, eating disorders, and inappropriate content.

Meta is changing the way its AI chatbots interact with teens. Following criticism and a Reuters investigation, the company announced that its systems will now be trained to avoid sensitive conversations with underage users.

The focus is on blocking chatbot discussions around:

  • Self-harm
  • Suicide
  • Disordered eating
  • Romantic or sexual topics

Instead, teens will be redirected to expert resources.

New Guardrails for AI Characters

Meta spokesperson Stephanie Otway said the company is refining its approach. “We’re adding more guardrails as an extra precaution, including limiting teen access to a select group of AI characters,” she explained.

This means teens will no longer see user-made AI personas with sexualized themes, such as “Step Mom” or “Russian Girl.” Access will instead be restricted to educational and creative chatbots.

Background and Criticism

The changes follow backlash over a Reuters report that revealed troubling examples of chatbots interacting with minors in sexualized ways. One internal policy document permitted responses such as, “Your youthful form is a work of art,” which sparked widespread outrage.

The report also highlighted chatbot responses to requests for violent or sexual imagery of public figures, raising further safety concerns.

In response, lawmakers and regulators acted quickly:

  • Sen. Josh Hawley launched a federal probe into Meta’s AI practices.
  • A coalition of 44 state attorneys general issued a letter condemning the risks, saying they were “uniformly revolted” by the company’s failures.

Ongoing Updates Ahead

Meta says these are interim measures. More permanent policy changes are in development to keep teen interactions with Meta AI safe across Facebook, Instagram, and the company’s other platforms.

Otway declined to provide numbers on how many teen users interact with Meta’s AI chatbots but stressed that safety remains a priority.

Meta is under pressure to prioritize child protection as AI becomes more integrated into its platforms. The new restrictions mark a first step, but regulators, parents, and watchdogs will continue to monitor how well the company enforces its updated rules.
