
California lawmakers tackle potential dangers of AI chatbots after parents raise safety concerns
To address the problem, state lawmakers introduced a bill that would require operators of companion chatbot platforms to remind users at least every three hours that the virtual characters aren't human. Platforms would also need to take other steps, such as implementing a protocol for addressing suicidal ideation, suicide, or self-harm expressed by users, including showing users suicide prevention resources.
