The global focus on training AI models and drafting related laws is insufficient, according to Dr. Rumman Chowdhury, the US science envoy for AI. She emphasizes that AI systems need rigorous testing for security and safety before deployment. She also notes that the enforceability of existing laws remains a significant challenge, particularly in the tech sector, where accountability mechanisms are often lacking.
Dr. Chowdhury highlights the importance of addressing bias and discrimination in AI, since many models are trained on data that is not representative of diverse populations. Solutions for agricultural improvements, for instance, must account for local contexts to avoid biased outputs. She adds that building a robust AI talent pool through collaboration among the public, private, and academic sectors is essential for the future of AI development.
• Testing AI for security and safety is currently lacking.
• Bias in AI models can lead to inappropriate outputs for diverse regions.
AI security involves ensuring that artificial intelligence systems are safe from vulnerabilities and threats.
Bias in AI refers to the tendency of AI models to produce unfair or prejudiced outcomes based on skewed training data.
AI testing is the process of evaluating AI systems to ensure they function correctly and safely before deployment.