Cybersecurity researchers have uncovered a critical security flaw in the AI-as-a-service provider Replicate that could have allowed unauthorized access to sensitive AI models and data. The vulnerability could have exposed the AI prompts and results of all customers on the Replicate platform. The issue arose from the packaging of AI models in formats that permit arbitrary code execution, opening the door to cross-tenant attacks.
The flaw, responsibly disclosed in January 2024, was promptly addressed by Replicate, mitigating the risk of customer data compromise. The exploit involved leveraging a rogue container to execute code on Replicate's infrastructure, highlighting the dangers of running AI models from untrusted sources. This incident underscores the importance of robust security measures in AI-as-a-service platforms to safeguard against potential breaches and data exposure.
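The underlying hazard, model formats that execute code when loaded, is a known property of serialization mechanisms such as Python's pickle. The sketch below is a minimal, hypothetical illustration of that general class of risk, not the specific Replicate exploit (which involved a rogue container); the class name and payload are invented for the example. A crafted "model file" can name any callable via `__reduce__`, and deserializing it runs that callable immediately.

```python
import pickle

class MaliciousModel:
    """Hypothetical stand-in for a booby-trapped model file."""

    def __reduce__(self):
        # Pickle will call this callable with these args at load time.
        # Here it is a harmless str.upper; an attacker could just as
        # easily name os.system or any other callable.
        return (str.upper, ("arbitrary code ran at load time",))

# Serializing the object produces an innocuous-looking byte blob...
payload = pickle.dumps(MaliciousModel())

# ...but merely loading it triggers the embedded call, before the
# "model" is ever used.
result = pickle.loads(payload)
print(result)
```

This is why safer model formats (e.g. plain weight tensors without embedded code) and sandboxed loading are recommended whenever models come from untrusted sources.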
Source: The Hacker News
Isomorphic Labs, the AI drug discovery platform spun out of Google's DeepMind in 2021, has raised external capital for the first time: a $600 million funding round.
How to level up your teaching with AI. Discover how to use clones and GPTs in your classroom—personalized AI teaching is the future.
Trump's third term? AI already knows how it could be done. A study shows how models from OpenAI, Grok, DeepSeek, and Google outline ways to dismantle U.S. democracy.
Sam Altman today revealed that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.