Character AI, a platform that lets users roleplay with AI chatbots, has filed a motion to dismiss a case brought against it by the parent of a teen who died by suicide, allegedly after becoming hooked on the company's technology.

In October, Megan Garcia filed a lawsuit against Character AI in the U.S. District Court for the Middle District of Florida, Orlando Division, over the death of her 14-year-old son, Sewell Setzer III. According to Garcia, her son developed an emotional attachment to a chatbot on Character AI, "Dany," which he texted constantly, to the point where he began to pull away from the real world.

Following Setzer's death, Character AI said it would roll out a number of new safety features, including improved detection, response, and intervention around conversations that violate its terms of service. But Garcia is fighting for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.

In the motion to dismiss, counsel for Character AI asserts that the platform is protected against liability by the First Amendment, just as computer code is. The motion may not persuade a judge, and Character AI's legal justifications may change as the case proceeds. But it likely hints at the early elements of the company's defense.

"The First Amendment prohibits tort liability against media and technology companies arising from allegedly harmful speech, including speech allegedly resulting in suicide," the filing reads. "The only difference between this case and those that have come before is that some of the speech here involves AI. But the context of the expressive speech — whether a conversation with an AI chatbot or an interaction with a video game character — does not change the First Amendment analysis."

The motion does not address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that shields social media and other online platforms from liability for third-party content. The law's authors have implied that Section 230 does not protect output from AI like Character AI's chatbots, but that is far from a settled legal question.

Counsel for Character AI also claims that Garcia's real intention is to "shut down" the platform and prompt legislation regulating technologies like it. Should the plaintiffs prevail, it would have a "chilling effect" on Character AI and the entire nascent generative AI industry, counsel for the platform says.

"Apart from counsel's stated intention to 'shut down' Character AI, [their complaint] seeks drastic changes that would materially limit the nature and volume of speech on the platform," the filing reads.

The lawsuit, which also names Character AI's corporate backer Alphabet as a defendant, is just one of several that Character AI faces relating to how minors interact with the AI-generated content on its platform. Other suits allege that Character AI exposed a 9-year-old to "hypersexualized content" and promoted self-harm to a 17-year-old user.

In December, Texas Attorney General Ken Paxton announced that he was launching an investigation into Character AI and 14 other tech companies over alleged violations of the state's online privacy and safety laws for children. "These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm," Paxton said.

Character AI is part of a booming industry of AI companionship apps, the mental health effects of which remain largely unstudied. Some experts have expressed concern that these apps could worsen feelings of loneliness and anxiety.
Character AI, which was founded in 2021 by Google AI researcher Noam Shazeer, and which Google reportedly paid $2.7 billion to "reverse acquihire," has said that it continues to take steps to improve safety and moderation. In December, the company rolled out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.

Character AI has gone through a number of personnel changes after Shazeer and the company's other co-founder, Daniel De Freitas, left for Google. The platform hired a former YouTube exec as chief product officer and named Dominiq Perella, who was Character AI's general counsel, interim CEO.