The US AI Safety Institute, part of the National Institute of Standards and Technology (NIST), has named Paul Christiano, a former OpenAI researcher, as its head of AI safety. Christiano is known for his work on reinforcement learning from human feedback and for estimating a 50 percent chance that AI development ends in 'doom.' While Christiano's expertise is widely acknowledged, concerns have been raised that his 'AI doomer' views could encourage non-scientific thinking within NIST.
The appointment has sparked controversy within NIST, with reports of staff opposition over his views. Critics argue that focusing on hypothetical AI risks may divert attention from current AI-related issues such as ethics and bias. Despite the skepticism, Christiano's role will involve monitoring and mitigating AI risks, drawing on his experience in AI safety research. The leadership team at the safety institute also includes experts from a range of fields to ensure a comprehensive approach to AI safety.
Isomorphic Labs, the AI drug discovery platform spun out of Google's DeepMind in 2021, has raised external capital for the first time, a $600 million round.
How to level up your teaching with AI. Discover how to use clones and GPTs in your classroom—personalized AI teaching is the future.
Trump's Third Term? AI already knows how it could be done. A study shows how models from OpenAI, Grok, DeepSeek, and Google outline ways to dismantle U.S. democracy.
Sam Altman today revealed that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.
