The risks that generative artificial intelligence models will leak private health information, enable cyberattacks, and produce incorrect answers must be addressed by the technology sector, according to guidance laid out Friday by US government scientists. The National Institute of Standards and Technology called on technology companies to carefully review the data they use to train their models to ensure it doesn't include confidential information. It also emphasized the need to make it harder for humans to direct models to create obscene images.
Generative AI, a class of model that can create text, images, and video in response to human prompts, poses challenges for maintaining data privacy and preventing misuse. The guidelines, though voluntary, aim to give global leaders and technology companies a common framework for discussing guardrails for the technology. Companies such as Cisco Systems Inc. and International Business Machines Corp. are actively involved in shaping AI policies and strategies.
Isomorphic Labs, the AI drug discovery platform spun out of Google's DeepMind in 2021, has raised external capital for the first time: a $600 million round.
How to level up your teaching with AI: using AI clones and custom GPTs to personalize instruction in the classroom.
Trump's third term? A study shows how models from OpenAI, Grok, DeepSeek, and Google outline ways US democracy could be dismantled.
Sam Altman announced today that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.