
In a significant bipartisan effort, Arizona Attorney General Kris Mayes has allied with 43 other state attorneys general to call on leading artificial intelligence firms to take swift measures to shield children from harmful interactions with AI chatbots.
The coalition sent a collective letter to 13 prominent AI developers, including Anthropic, Apple, Google, Meta, Microsoft, and OpenAI, insisting on enhanced protections in light of reports indicating that AI systems have engaged in sexually inappropriate dialogues with minors.
The letter states that internal documents from Meta reportedly show that the company allowed its AI Assistants to “flirt and engage in romantic roleplay with children” as young as eight years old. The attorneys general also reference instances where AI chatbots have prompted risky behavior among teenagers, including self-harm and violence.
Attorney General Mayes condemned the tech industry’s apparent negligence in protecting young users. “The rush to develop new artificial intelligence technology has led big tech companies to recklessly put children in harm’s way,” Mayes said in a statement. “I will not stand by as AI chatbots are reportedly used to engage in sexually inappropriate conversations with children and encourage dangerous behavior.”
“As these companies race toward an AI-powered future, they cannot adopt policies that subject kids to sexualized content and conversations,” said Tennessee Attorney General Skrmetti. “It’s one thing for an algorithm to go astray—that can be fixed—but it’s another for people running a company to adopt guidelines that affirmatively authorize grooming. AI tools can radically reshape our world for the better, but they can also present threats to kids that are more immediate, more personal, and more dangerous than any prior technology,” Skrmetti continued. “If we can’t steer innovation away from hurting kids, that’s not progress—it’s a plague.”
“Deepfake nonconsensual intimate imagery causes real harm,” Illinois Attorney General Kwame Raoul said. “In particular, AI-generated child pornography is an increasing concern, as it is used by predators to lure and groom minors and to normalize their own reprehensible behavior. We all have a responsibility to protect our children from the trauma of exploitation. I join my colleagues in calling on tech companies to do their part.”
“Social media companies know their products are addictive and destructive to children’s mental health,” said South Carolina Attorney General Wilson. “They have engineered these platforms with the same tactics used by tobacco companies, keeping kids glued to screens while depression, anxiety, and self-harm skyrocket. Parents are fighting a battle they cannot win alone. States must step in to protect the next generation.”
The attorneys general are calling for companies to implement robust content moderation and policy guardrails to prevent exploitation or harm to minors. “AI companies must see children through the eyes of a parent, not the eyes of a predator,” the letter states.
The letter also criticizes regulators’ past inaction on social media harms and warns that the same mistake will not be repeated with AI. It ends with a clear message to AI developers: “We wish you success in the race for AI dominance. But if you knowingly harm kids, you will answer for it.”
The initiative is spearheaded by the Attorneys General of Arizona, Tennessee, Illinois, North Carolina, and South Carolina, and backed by 39 additional states and territories, including California, Florida, New York, Texas, and Washington.
“Tools that allow people to generate intimate images and videos of real people without their consent can cause significant harm to the public — particularly to women and girls. These images have been used to bully, harass, and exploit people all over the world,” said California Attorney General Bonta. “Today, I joined a coalition of attorneys general in sending letters to companies that are indirectly part of the ecosystem that enables the distribution of this material, asking them to be part of the solution in preventing the dissemination of deepfakes. As technology rapidly evolves, I am committed to engaging in conversations with industries to ensure we’re all working together to guide AI to the positive potential that will benefit us — not hurt us.”
“The darkness on the internet is spreading, and it poses a growing danger to Kentuckians, particularly women and girls. These online deepfakes can lead to emotional distress and long-term damage. Nearly every AG in this country on both sides of the aisle is joining the fight to protect our kids,” said Kentucky Attorney General Russell Coleman.
“The creation of deepfakes is a grave harm to human decency and dignity, especially among our young people,” said Massachusetts Attorney General Andrea Joy Campbell. “It is critical that we take all steps necessary to prevent the spread of harmful and exploitative content, and I am committed to protecting the residents of our Commonwealth by advocating for accountability.”
The letter adds to growing scrutiny of AI technologies' impact on vulnerable groups, especially children, as lawmakers and regulators grapple with the technology's rapid advancement.