The UK AI Safety Institute has unveiled 'Inspect,' an open-source toolset for evaluating AI models. The platform aims to address the lack of consistent, accessible approaches to AI safety evaluations by providing a standardized way to assess model capabilities and safety-relevant behavior. Inspect consists of datasets that define test scenarios, solvers that execute the tests, and scorers that analyze the results, all released under a permissive MIT license.
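To illustrate how those three pieces fit together, here is a minimal sketch of an evaluation written against Inspect's Python package (inspect_ai), modeled on the project's published examples; the dataset name, solver chain, model string, and exact parameter names are assumptions that may differ across versions.

```python
# A minimal Inspect-style evaluation: a dataset supplies the test
# scenarios, a chain of solvers runs the model against them, and a
# scorer grades the results. Names below (example_dataset, the
# "theory_of_mind" sample set, the model string) are illustrative.
from inspect_ai import Task, eval, task
from inspect_ai.dataset import example_dataset
from inspect_ai.scorer import model_graded_fact
from inspect_ai.solver import chain_of_thought, generate, self_critique

@task
def theory_of_mind():
    return Task(
        # dataset: the scenarios (inputs and target answers) to test
        dataset=example_dataset("theory_of_mind"),
        # solver: the steps executed against the model for each sample
        solver=[chain_of_thought(), generate(), self_critique()],
        # scorer: how the resulting transcripts are judged
        scorer=model_graded_fact(),
    )

if __name__ == "__main__":
    # Run the evaluation against a chosen model (string is illustrative);
    # the same task can also be run from the `inspect eval` CLI.
    eval(theory_of_mind(), model="openai/gpt-4o")
```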
By releasing Inspect, the UK is positioning itself as a global leader in AI safety, emphasizing the importance of managing AI risks. The toolset allows for collaboration among different stakeholders, including startups, researchers, developers, and government bodies, to enhance AI safety testing. The UK's initiative aligns with international efforts to establish trustworthy AI standards and promote responsible AI development worldwide.
Isomorphic Labs, the AI drug discovery platform that was spun out of Google's DeepMind in 2021, has raised external capital for the first time, a $600 million round.
How to level up your teaching with AI. Discover how to use clones and GPTs in your classroom—personalized AI teaching is the future.
Trump's Third Term? AI already knows how it could be done. A study shows how models from OpenAI, xAI's Grok, DeepSeek, and Google outline ways to dismantle U.S. democracy.
Sam Altman today revealed that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.