Recent research from the University of Minnesota School of Public Health reveals significant disparities in how hospitals assess bias in AI healthcare tools. While many hospitals use AI for predictive modeling and administrative tasks, only 44% actively evaluate these systems for bias. This inconsistency raises concerns about equitable treatment and patient safety across healthcare institutions.
The study also finds that wealthier hospitals are more likely to have the resources to develop and evaluate custom AI models, creating a digital divide in healthcare. Future research aims to examine newer AI applications such as ambient scribes and chatbots, with the potential to inform policy and best practices in healthcare AI. Funding from the U.S. Department of Health and Human Services underscores the importance of addressing these disparities.
• Only 44% of hospitals assess AI tools for potential biases.
• Wealthier hospitals can afford better AI systems and evaluations.
• AI: the simulation of human intelligence in machines, increasingly used in healthcare.
• Predictive models: systems that use historical data to forecast future outcomes, commonly applied in patient health assessments.
• Digital divide: the gap between hospitals with advanced technology and those relying on basic solutions.
