
The "child-safe AI" framework aims to help companies, teachers, school leaders, parents, developers, and policy actors think systematically about how to keep younger users safe when they interact with AI chatbots2. It consists of 28 questions addressing issues such as understanding and interpreting children's speech patterns, content filters, built-in monitoring, and encouraging children to seek help from a responsible adult on sensitive issues.

AI chatbots exhibit an "empathy gap": they struggle to respond appropriately to users' emotions and needs, particularly when interacting with children. This stems from their reliance on statistical probability to mimic language patterns without genuine understanding, which leaves them ill-equipped to handle the abstract, emotional, and unpredictable aspects of conversation.

The study by Dr. Nomisha Kurian highlights several incidents that expose the risks of AI interactions with children. These include a 2021 case in which Amazon's AI voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin, and an incident in which Snapchat's My AI gave adult researchers posing as a 13-year-old girl tips on how to lose her virginity to a 31-year-old. Both companies responded by implementing safety measures.