Chatbot platform Character AI claims it is protected by the First Amendment in motion to dismiss

Character AI, a platform that lets users role-play with AI chatbots, has filed a motion to dismiss a lawsuit brought against it by the parent of a teenager who died by suicide.

In October, Megan Garcia filed a lawsuit against Character AI over the death of her son, Sewell Setzer III. According to Garcia, her 14-year-old son formed an emotional attachment to a chatbot on Character AI, "Dany," which he texted constantly, to the point that he began to withdraw from the real world.

Following Setzer's death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.

In the motion to dismiss, counsel for Character AI asserts that the platform is protected against liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI's legal justifications may change as the case proceeds. But the motion possibly hints at early elements of Character AI's defense.

"The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide," the filing reads. "The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech (whether a conversation with an AI chatbot or an interaction with a video game character) does not change the First Amendment analysis."

To be clear, Character AI's counsel is not asserting the company's own First Amendment rights. Rather, the motion argues that Character AI's users would have their First Amendment rights violated should the lawsuit against the platform succeed.

The motion does not address whether Character AI might be shielded from liability under Section 230 of the Communications Decency Act, the federal safe-harbor law that protects social media and other online platforms from liability for third-party content. The law's authors have implied that Section 230 does not protect output from AI like Character AI's chatbots, but that is far from a settled legal matter.

Character AI's counsel also claims that Garcia's real intention is to "shut down" Character AI and prompt legislation regulating technologies like it. Should the plaintiffs prevail, it would have a "chilling effect" on both Character AI and the entire nascent generative AI industry, the platform's counsel says.

"Apart from counsel's stated intention to 'shut down' Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform," the filing reads. "These changes would radically restrict the ability of Character AI's millions of users to generate and participate in conversations with characters."

The lawsuit also names Google parent Alphabet as a defendant. It is just one of several suits Character AI is facing over how minors interact with AI-generated content on its platform. Other lawsuits allege that Character AI exposed a 9-year-old user to "hypersexualized content" and promoted self-harm to a 17-year-old user.

In December, Texas Attorney General Ken Paxton announced an investigation into Character AI and 14 other tech companies over alleged violations of the state's online privacy and safety laws for children. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm," Paxton said in a press release.

Character AI is part of a booming industry of AI companionship apps, whose effects on mental health remain largely unstudied. Some experts worry that these apps could exacerbate feelings of loneliness and anxiety.

Character AI, founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to "reverse acquihire," has said that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.

Character AI has gone through a number of personnel changes since Shazeer and the company's other co-founder, Daniel De Freitas, left for Google. The platform hired former YouTube executive Erin Teague as chief product officer and named Dominic Perella, Character AI's general counsel, as interim CEO.

Character AI has recently been testing games on the web in an effort to boost user engagement and retention.
