The fallout over Kumma the bear, a stuffed toy initially powered by ChatGPT and designed to interact with children, began in November.

A researcher for U.S. PIRG Education Fund tested the product, alongside other AI toys, and published the alarming findings. Sweet, soft Kumma would happily tell its conversation partner how to light a match as well as discuss sexual kink. 

The bear’s maker, FoloToy, had licensed OpenAI’s technology to program Kumma’s responses. FoloToy temporarily stopped Kumma sales to conduct a safety audit. The revelations prompted OpenAI to indefinitely suspend FoloToy’s developer access — even though the toy may still be relying on ChatGPT to produce its responses.

Meanwhile, ahead of the holiday shopping season, child advocacy groups expressed urgent concern over AI toys. In December, two U.S. senators sent letters to companies inquiring about how they design and manufacture AI toys. In January, a California state senator introduced legislation that would put a four-year moratorium on the sale of AI chatbot toys to anyone under 18. On Thursday, Common Sense Media declared AI toys unsafe for children 5 and younger.

As for Kumma, the bear’s fate is a complicated tale about what can happen when an AI toy hits the market before families, companies, and regulators have fully considered the ramifications. Legal experts interviewed by Mashable say AI toys exist in unclear and unfamiliar legal territory.

There is no obvious answer — yet — to the question: Who exactly is responsible if a child is harmed when engaging with an AI toy? 

Of course, that assumes toymakers can and will be transparent about the technology their product relies on. OpenAI no longer permits its licensees to publicly disclose that their product uses the company’s technology, including ChatGPT, unless they’ve received “express prior written permission in each instance.” 

This concerns R.J. Cross, director of the Our Online Life program for U.S. PIRG Education Fund. Cross was the researcher who discovered Kumma’s “failure points.” 

“When you have OpenAI specifically saying you can’t publicly disclose this without our permission, that’s just going to make it harder for everyone — parents, caretakers, regulators — to know what’s really happening, and that’s not a good thing,” said Cross.

How did ChatGPT get into Kumma? 

Consumers who saw the headlines about Kumma might have wondered how ChatGPT, an AI chatbot with more than 800 million weekly users, ended up in a stuffed bear sold online by a company without household-name recognition. 

The explanation might surprise consumers unfamiliar with the licensing agreements that OpenAI makes with developers to access and integrate its large language models into their own products. Such agreements are standard and strategic in the technology industry, particularly for companies looking to scale their business quickly. 

In 2025, OpenAI inked a deal with Mattel, but the toymaker didn’t launch an AI product by year’s end. The AI companies Perplexity and Anthropic have been previously linked to children’s toys designed and manufactured by a third party, according to Cross’ research. 

Yet OpenAI’s commitment to youth safety is under tremendous scrutiny. The company faces multiple wrongful death lawsuits related to ChatGPT use. Some of the plaintiffs are parents of teens who allege that ChatGPT coached their children to conceal mental health problems and take their own lives in moments of extreme distress. 

“We now know — and we think the lawsuit puts a pretty fine point on the fact — that ChatGPT is not a safe product,” said Eli Wade-Scott, a partner at Edelson PC and a lawyer representing parents suing OpenAI for the suicide death of their son, Adam Raine. The company has denied the allegations in that case.

Cross has struggled to understand why OpenAI licenses ChatGPT to developers who use it in children’s products, given that the company’s own terms of service prohibit chatbot use by minors under 13. 

OpenAI told Mashable that any developer that deploys one of the company’s large language models in products for younger users must obtain parental consent and comply with child safety and privacy law. (Cross said FoloToy now asks for parental consent to collect a child’s data via its web portal settings.)

Developers are also required to follow OpenAI’s universal usage policies, which prohibit, among other things, exposing minors to sexual and violent content. OpenAI does run algorithms to help ensure its services are not used by licensees to harm minors, and gives developers free access to its proprietary moderation tools.

OpenAI told Mashable that its “managed customers” work with the company’s sales team on deployment strategies and safety. When OpenAI becomes aware of a developer whose toy or product designed for minors violates its usage policies, the company either warns or suspends them.


“You can put into a contract how serious you are about them using it in an ethical and safe way.”

– Colleen Chien, professor of law at U.C. Berkeley School of Law

Colleen Chien, a professor of law at U.C. Berkeley School of Law, told Mashable that companies can be more careful when licensing their technology by creating a “vetted partner” program that places key restrictions on the licensee. This process could include requiring licensees to complete certification or training to ensure they’re using the technology safely and appropriately.

“You can put into a contract how serious you are about them using it in an ethical and safe way,” said Chien, who is also co-director of the Berkeley Center for Law & Technology. “Or you can be much more loose about it.” 

With the latter approach, the company might suspend a licensee if it discovers violations of the contract or receives allegations of improper use. 

“At that point, the damage has already been done, and you’re not really taking responsibility ex ante for what might happen downstream,” Chien said. 

What happens when AI toys harm? 

If a child has a harmful or dangerous experience with an AI toy powered by ChatGPT, OpenAI is very clear about who’s to blame. The company told Mashable that its licensees are solely responsible for their product’s outputs.

In addition, OpenAI’s services agreement appears to absolve the company and its licensees of liabilities, damages, and costs related to a third-party claim. The agreement also prohibits class action lawsuits to resolve disputes, which could include claims related to an AI toy.

Chien notes that consumer safety law doesn’t require companies to sell a “perfectly safe” product. Instead, a company must take reasonable precautions and not subject its customers to outsized risk. Laws requiring a perfect safety record, she said, could stifle innovation, particularly in technology. 

Still, Chien said some liability should probably remain with OpenAI, because its size and resources give the company a clear advantage in detecting and avoiding risks to downstream users, like families who purchase AI toys powered by its technology.

Either way, she acknowledges that the rapid adoption of large language models in consumer products raises novel issues about who’s liable when things go wrong. Product safety laws, for example, currently emphasize physical harm, but what if a child’s stuffed AI toy tells her how to lie to her parents or subjects her to conversational sexual abuse?

Aaron P. Davis, a partner at the commercial litigation firm Davis Goldman, said he doesn’t believe OpenAI should be responsible for every incident that might have involved consultation with ChatGPT. Yet he does think extra caution regarding AI toys is warranted, given their unique ability to earn the trust of vulnerable users, as a therapist, doctor, or teacher might.

“This is going to be taken on a case by case basis, and I think that it’s sort of a dangerous avenue that we’re going down,” he said of the product’s potential risks. 

Davis, who reviewed OpenAI’s services agreement for Mashable, said he wasn’t sure whether key clauses related to publicity and liability would be enforceable. 

Prohibiting licensees from sharing that their product incorporates ChatGPT could impinge on fair use law, he noted. Davis was also skeptical of OpenAI’s motivation for including this clause. 

“The reason [OpenAI] is doing this is because they don’t want people to be able to figure out who made the AI so they get sued,” Davis said. 

Confusingly, OpenAI does permit licensees to reference a specific model if their product leverages the company’s developer platform.

“I think the conflicting policies underlie the platform’s intention to insulate itself from liability while maintaining the utility of the product,” he said.

The agreement’s clause related to class actions also gave Davis pause. He argued that it effectively prevents a customer who’s discovered a product defect from publicizing it widely. 

In general, Davis found the language favorable to OpenAI in ways that could significantly shield it from consumer transparency and accountability. 

What happened to Kumma?

Kumma is available for sale online again, but its return to the market comes with yet more questions.

Larry Wang, FoloToy’s founder and CEO, told Mashable that the company’s internal safety review led to strengthened age-appropriate content rules and tightened topic constraints, among other safety measures.

Indeed, when R.J. Cross tested Kumma again in December, it deflected the same questions she originally asked about kink and how to light a match. 

“We’re glad to see that,” Cross said. “It’s kind of the bare minimum.”

Yet Cross also noticed something inexplicable: Despite FoloToy’s indefinite suspension from OpenAI’s developer API, users could still select ChatGPT-5.1 and 5.1 Chat from a dropdown menu of large language models to program Kumma’s responses.

Wang did not respond to Mashable’s questions about whether the company continued to use ChatGPT for Kumma. OpenAI told Mashable it had not reversed FoloToy’s suspension, but didn’t provide further details about why or how ChatGPT could appear functional for Kumma.

As a researcher, Cross is dependent on transparency from manufacturers. Without it, she can’t as easily connect problems with AI toys that rely on the same large language model. But consumers need it too, she argues. 

If a toy uses xAI’s Grok model to respond, for example, a consumer might make a different choice upon learning that the model has created sexual abuse imagery using pictures of real women and children.

“[T]hey deserve to have information available if they do want to look into things more carefully,” she said. 

Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


