The article discusses the importance of oversight for AI technologies in national security agencies, given the significant risks they pose to privacy, civil rights, and civil liberties. While the Office of Management and Budget has issued guidance mandating transparency and risk management practices for federal AI use, national security systems are exempt from these rules. This exemption creates oversight gaps, particularly for individuals seeking to challenge national security decisions that affect them. The opacity of many AI systems compounds these problems, underscoring the need for immediate attention and oversight.
To address these challenges, the article proposes the Privacy and Civil Liberties Oversight Board (PCLOB) as a model for independent oversight. However, PCLOB's current mandate and capacity are insufficient to oversee all AI systems used in national security. The article suggests either expanding PCLOB's jurisdiction and resources or creating a new oversight body with a broad mandate covering all AI used in national security systems. It emphasizes that effective oversight of AI in national security depends on independence, a clear mandate, strong leadership, adequate resources, access to information, and a commitment to transparency and accountability.