The National Institute of Standards and Technology (NIST) has re-released a testbed called Dioptra to measure how malicious attacks, such as poisoning a model's training data, can degrade AI system performance. Dioptra is an open-source, web-based tool designed to help companies and users assess, analyze, and track AI risks by benchmarking and researching models. It aims to provide a common platform for exposing models to simulated threats in a "red-teaming" environment.
Dioptra's goal is to test the effects of adversarial attacks on machine learning models and to help evaluate AI developers' claims about their systems' performance. The tool was developed in response to President Joe Biden's executive order on AI, which directs NIST to assist with AI system testing and to establish standards for AI safety and security. Dioptra has a notable limitation, however: it currently works only on models that can be downloaded and run locally.
Isomorphic Labs, the AI drug discovery platform that was spun out of Google's DeepMind in 2021, has raised external capital for the first time. The $600 million round was led by Thrive Capital.
How to level up your teaching with AI. Discover how to use clones and GPTs in your classroom—personalized AI teaching is the future.
Trump's third term? AI already knows how it could be done. A study shows how models from OpenAI, xAI's Grok, DeepSeek, and Google outline ways to dismantle U.S. democracy.
Sam Altman today revealed that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.