Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over children's privacy and safety concerns. The investigation will assess whether Character.AI and other platforms popular with young people, including Reddit, Instagram, and Discord, comply with Texas's children's privacy and safety laws.
The investigation by Paxton, who regularly cracks down on tech companies, will examine whether these platforms complied with two Texas laws: the Securing Children Online through Parental Empowerment Act (SCOPE Act) and the Texas Data Privacy and Security Act (DPSA).
These laws require platforms to provide parents with tools to manage the privacy settings of their children's accounts, and they impose strict consent requirements on tech companies collecting data on minors. Paxton claims that both laws extend to how minors interact with AI chatbots.
“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a statement.
Character.AI, which lets users create generative AI chatbot characters to text and chat with, has recently been embroiled in a series of child safety lawsuits. The company's AI chatbots quickly caught on with younger users, but several parents have filed lawsuits alleging that Character.AI's chatbots made inappropriate and disturbing comments to their children.
A Florida case alleges that a 14-year-old boy formed a romantic relationship with a Character.AI chatbot and told it he was having suicidal thoughts in the days before his suicide. In another case, out of Texas, one of Character.AI's chatbots reportedly suggested that an autistic teenager should try to poison his family. Another parent in the Texas case claims one of Character.AI's chatbots exposed her 11-year-old daughter to sexualized content over the past two years.
“We are currently reviewing the Attorney General's announcement. As a company, we take the safety of our users very seriously,” a Character.AI spokesperson said in a statement to TechCrunch. “We welcome working with regulators and recently announced that we will be launching some of the features referenced in the release, including parental controls.”
Character.AI on Thursday rolled out new safety features to protect teens, saying these updates will prevent its chatbots from initiating romantic conversations with minors. Last month, the company also began training a new model specifically for teen users; it hopes to one day have adults use one model on its platform while minors use another.
These are only the latest safety updates Character.AI has announced. The same week the Florida lawsuit became public, the company said it had expanded its trust and safety team and recently hired a new leader for the unit.
Predictably, the problems with AI companion platforms are emerging just as they are becoming increasingly popular. Last year, Andreessen Horowitz (a16z) said in a blog post that it saw AI companionship as an undervalued area of the consumer internet in which it would invest more. A16z is an investor in Character.AI and continues to invest in other AI companion startups, recently backing a company whose founder wants to recreate the technology from the movie “Her.”
Reddit, Meta, and Discord did not immediately respond to requests for comment.