Researchers at Apple have revealed that answers from large language models (LLMs) are often misleading. Their study indicates that these models lack true logical reasoning capabilities despite appearing intelligent: when the team tested a range of LLMs, even minor changes to the wording of a question could produce incorrect or nonsensical answers.
The research underscores how much logical reasoning depends on understanding context, as illustrated by a simple word problem about counting apples. The findings suggest that LLMs do not genuinely comprehend the questions they are asked but instead rely on patterns learned during training to generate responses, raising critical questions about the reliability of AI in tasks that require nuanced understanding.
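The kind of perturbation the researchers describe can be sketched in a few lines of code. The snippet below is a minimal illustration, not the study's actual test harness: it assumes the OpenAI Python client with an API key in the environment, and the model name and apple question are hypothetical stand-ins. It asks the same question twice, once with an irrelevant clause inserted, so the two answers can be compared.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Baseline word problem, and a variant with an irrelevant detail added.
# Both have the same correct answer: 10 + 20 = 30 apples.
base = ("Sam picks 10 apples on Monday and twice as many on Tuesday. "
        "How many apples does Sam have in total?")
perturbed = ("Sam picks 10 apples on Monday and twice as many on Tuesday, "
             "but five of the apples are slightly smaller than average. "
             "How many apples does Sam have in total?")

for label, question in (("base", base), ("perturbed", perturbed)):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model works
        messages=[{"role": "user", "content": question}],
    )
    print(f"{label}: {reply.choices[0].message.content}")
```

The size of the apples changes nothing about the arithmetic, yet, per the study's findings, a model that matches patterns rather than reasons may treat the extra clause as mathematically relevant and give a different, wrong answer to the second prompt.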
• Apple researchers find AI responses often lack genuine understanding.
• Testing reveals LLMs struggle with logical reasoning and context.
• LLMs: AI systems designed to generate human-like text based on input data.
• Logical reasoning: the ability to analyze information and draw valid conclusions.
• Machine learning: algorithms that allow systems to learn from data and improve over time.
• Apple: a technology company known for its innovations in hardware and software, including AI research.