Legal measures aimed at restricting children’s access to AI-based chatbots have gained momentum in the US Congress. The Senate Judiciary Committee has unanimously approved and sent to the Senate a bill that requires technology companies to verify ages and prohibits providing “AI companions” to minors.
“Elchi” reports that the bill, known as the GUARD Act, would require AI tools to periodically remind users that they are not human and do not hold professional credentials.
The bill also provides for severe penalties for companies whose AI systems encourage children to share sexually explicit content or to commit suicide.
Simultaneous move in the House of Representatives
Concurrent with the development in the Senate, a similar bill has been introduced in the House of Representatives.
Legal experts have emphasized that children’s development should be supported through real-world interactions rather than unproven technologies on digital platforms.
Attention has been drawn to the risks AI chatbots pose to children’s mental health, and it has been stated that Congress must take urgent action on the issue. The push for regulation intensified after some parents accused AI applications of driving their children to violence or suicide. Currently, popular platforms such as ChatGPT, Google Gemini, and Meta AI allow children aged 13 and older to use their services.
Şayəstə