Calls for New Safety Standards in Child-AI Interaction

AI Toys Misread Toddler Emotions, Cambridge Study Warns

Researchers find generative AI devices struggle with social play and call for urgent safety kitemarks for children.

By Avantgarde News Desk · 1 min read
A toddler sits on a floor rug looking curiously and slightly confused at a small, glowing AI-powered interactive toy in a well-lit living room.
Photo: Avantgarde News

University of Cambridge researchers warn that generative AI toys for children under five often fail to recognize emotional cues [1][2]. A systematic study found these devices struggle with social play and frequently offer inappropriate responses to toddlers [1][3]. In some cases, toys provided robotic or guideline-based answers when children expressed feelings of love or sadness [1]. Experts are now calling for tighter regulations and safety kitemarks to protect the psychological development of young users [1][2]. The research highlights concerns that current AI models are not designed for the nuanced interactions required by developing minds [2][3].

Editorial notes

Transparency note: Drafted with LLM; human-edited
AI assisted: Yes
Human review: Yes
Last updated:
Risk assessment: Minimal

Reviewed for sourcing quality and editorial consistency.

Sources




About the author

Avantgarde News Desk covers calls for new safety standards in child-AI interaction and provides editorial analysis for Avantgarde News.