Concerns about biosecurity risks from AI models in biology are escalating. Doni Bloomfield and colleagues advocate for enhanced governance and mandatory pre-release safety evaluations to mitigate these risks. They emphasize the need for national legislation to prevent advanced biological models from contributing to large-scale dangers like pandemics.
While biological AI models offer significant benefits, such as improving drug design and agricultural resilience, they also pose serious risks. The authors highlight the inadequacy of voluntary safety measures and call for standardized evaluations to assess safety before models are released. Policies, they argue, should target the highest-risk models while leaving room for scientific exploration of the technology's benefits.
Isomorphic Labs, the AI drug discovery platform spun out of Google's DeepMind in 2021, has raised external capital for the first time: a $600 million round.
How to level up your teaching with AI: discover how to use AI clones and custom GPTs in your classroom. Personalized AI teaching is the future.
Trump's Third Term? AI already knows how it could be done: a study shows how models from OpenAI, xAI (Grok), DeepSeek, and Google outline ways to dismantle U.S. democracy.
Sam Altman revealed today that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.