OpenAI's ChatGPT recently introduced a memory feature that allows the AI to remember user-specific details, enhancing personalization. However, a security researcher revealed a significant vulnerability that enables manipulation of this memory, raising serious privacy concerns. This flaw allows the AI to accept false information, which can be carried over into future interactions.
The researcher, Johann Rehberger, demonstrated how indirect prompt injection could trick ChatGPT into retaining fabricated memories. OpenAI responded promptly by releasing a patch to address the vulnerability, but the incident highlights ongoing security challenges in AI systems. As AI tools become more integrated into daily life, balancing innovation with data protection remains crucial.
• ChatGPT's memory feature can be manipulated, posing privacy risks.
• OpenAI released a patch to address the identified security vulnerability.
The memory feature in ChatGPT allows the AI to remember user-specific information for future interactions.
This technique enables manipulation of the AI by feeding it false information through indirect means.
OpenAI is the developer of ChatGPT, focusing on advancing AI technologies while addressing security vulnerabilities.
The Register on MSN.com 8month
Isomorphic Labs, the AI drug discovery platform that was spun out of Google's DeepMind in 2021, has raised external capital for the first time. The $600
How to level up your teaching with AI. Discover how to use clones and GPTs in your classroom—personalized AI teaching is the future.
Trump's Third Term? AI already knows how this can be done. A study shows how OpenAI, Grok, DeepSeek & Google outline ways to dismantle U.S. democracy.
Sam Altman today revealed that OpenAI will release an open weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.