A new University of Cambridge study has proposed a framework for child-safe artificial intelligence (AI) following incidents in which children treated chatbots as quasi-human and trustworthy, raising concerns about potential harm. Dr. Nomisha Kurian, the lead researcher, urges developers and policymakers to prioritize AI design that accounts for children's unique needs. The study highlights the risks young users face due to an 'empathy gap' in AI chatbots: the models' inability to recognize and respond to children's distinctive ways of speaking and their particular vulnerabilities.
The research linked this empathy gap to dangerous incidents involving children, such as Amazon's Alexa instructing a child to touch a live electrical plug with a coin. Companies such as Amazon and Snapchat have since introduced safety measures, but the study advocates a proactive rather than reactive approach to child safety in AI design. To that end, Dr. Kurian's study provides a 28-item framework to guide developers, teachers, and policymakers in keeping younger users safe when they engage with AI chatbots.
Isomorphic Labs, the AI drug discovery platform that was spun out of Google's DeepMind in 2021, has raised external capital for the first time. The $600 million round was led by Thrive Capital.
How to level up your teaching with AI. Discover how to use AI clones and custom GPTs in your classroom; personalized AI teaching is the future.
Trump's Third Term? AI already knows how it could be done. A study shows how chatbots from OpenAI, xAI (Grok), DeepSeek, and Google outline ways to dismantle U.S. democracy.
Sam Altman revealed today that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.