The Australian federal government is opening consultations on AI regulation, prompted by a survey finding that one-third of businesses use AI without informing customers or employees. This lack of transparency raises significant ethical concerns, particularly as half of these companies have not conducted human rights or risk assessments. The government is considering an EU-style Artificial Intelligence Act to establish minimum standards for high-risk AI applications.
Industry and Science Minister Ed Husic has proposed ten mandatory guardrails for AI use, emphasizing the need for human oversight and the ability to challenge automated decisions. The discussion paper highlights potential dangers of AI, including bias amplification and privacy breaches, with real-world examples illustrating these risks. Experts warn that Australia is falling behind in AI regulation and investment, underscoring the urgency for comprehensive policies.
Isomorphic Labs, the AI drug discovery platform spun out of Google's DeepMind in 2021, has raised external capital for the first time. The $600 million round was led by Thrive Capital.
How to level up your teaching with AI. Discover how to use clones and GPTs in your classroom—personalized AI teaching is the future.
Trump's Third Term? AI already knows how this can be done. A study shows how OpenAI, Grok, DeepSeek & Google outline ways to dismantle U.S. democracy.
Sam Altman revealed that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.
