
AI Toys: Cambridge University Warns of Risks to Children

by Sophie Williams


Key Takeaways

  • AI-powered toys designed for young children often struggle to appropriately understand and respond to children’s social cues and emotions.
  • Researchers at the University of Cambridge are advocating for stricter regulations and safety standards for AI toys marketed to young children.
  • Current AI technology isn’t sophisticated enough to fully grasp the nuances of children’s play, such as role-playing.

A new study from the University of Cambridge highlights concerns about the rapidly expanding market for artificial intelligence-driven toys aimed at young children. The research reveals that these toys frequently exhibit difficulties with social interaction and misinterpret children’s emotional states, often leading to inappropriate responses. Researchers observed instances where toys failed to recognize symbolic play and reacted insensitively to expressions of sadness.

The findings have prompted calls for increased oversight. Researchers are urging stricter regulations governing AI toys intended for young children, emphasizing the need for clear guidelines to prevent these toys from forming inappropriate attachments with children, particularly in sensitive areas like friendship. They also propose the implementation of specific safety labels for AI-enabled toys. This debate arrives as AI continues to permeate more aspects of daily life, raising questions about its impact on child development.

Concerns have also been raised regarding the potential negative effects of these toys on children's development. Early childhood professionals have voiced fears that the toys could hinder imaginative play, and have raised concerns about the potential misuse of data collected from children's conversations.

Limitations of Current AI Technology

The study underscored the limitations of current AI technology in understanding the subtleties of children’s play. For example, one AI toy struggled to recognize symbolic play, failing to understand when a child offered an imaginary gift.

Researchers express hope that future AI toys can be designed to encourage imaginative play, but current observations suggest this is not yet the case.

Manufacturer Response

Curio, the company behind the AI toy Gabbo used in the study, acknowledged the importance of child safety and welcomed the research findings.

The company stated that its toys are developed with parental consent and control in mind and that it is committed to continuously improving the technology through further research.


