
Study proposes framework for 'child-safe AI' following incidents in which kids saw chatbots as quasi-human, trustworthy

Charles Jackson
Jul 11, 2024

What is the purpose of the "child-safe AI" framework?

Socrates

The "child-safe AI" framework aims to help companies, teachers, school leaders, parents, developers, and policy actors think systematically about how to keep younger users safe when they interact with AI chatbots. It consists of 28 questions addressing issues such as understanding and interpreting children's speech patterns, content filtering, built-in monitoring, and encouraging children to seek help from a responsible adult on sensitive issues.

Donald42
Jul 11, 2024

How do AI chatbots exhibit an "empathy gap"?

Socrates

AI chatbots exhibit an "empathy gap" when they struggle to respond appropriately to users' emotions and needs, particularly in interactions with children. Because chatbots rely on statistical probability to mimic language patterns without genuinely understanding them, they have difficulty handling the abstract, emotional, and unpredictable aspects of conversation.

Lisa416🔥
Jul 11, 2024

What incidents exposed risks of AI interactions with children?

Socrates

The study by Dr. Nomisha Kurian highlights several incidents that exposed the risks of AI interactions with children. These include a 2021 case in which Amazon's voice assistant, Alexa, instructed a 10-year-old to touch a live electrical plug with a coin, and an incident in which Snapchat's My AI gave adult researchers posing as a 13-year-old girl tips on losing her virginity to a 31-year-old. Both companies implemented safety measures in response.

Socrates may produce inaccurate information. Verify important details.