Ant Group has developed AI training techniques using Chinese-made semiconductors that it says cut training costs by about 20%. The company trained models on domestic chips from Alibaba and Huawei using the Mixture-of-Experts (MoE) machine learning approach. The work positions Ant against US rivals at a time when export restrictions limit access to advanced Nvidia chips.
Ant's research indicates that its models outperform Meta's on some benchmarks, suggesting meaningful advances in Chinese AI capabilities. The company aims to expand its AI services in healthcare and finance on the strength of these results. By making its models open source, Ant is contributing to the broader AI ecosystem while sidestepping the high cost of traditional training methods.
• Ant Group claims 20% cost reduction in AI model training using local chips.
• Ant's models reportedly outperform Meta's on specific benchmarks.
Mixture of Experts is a machine learning approach that splits a model into specialized expert sub-networks and routes each input to only a few of them, so only part of the model does work on any given query, which cuts training and inference costs.
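The routing idea can be sketched in a few lines. This is a hypothetical toy, not Ant's implementation: a gate scores every expert for an input, only the top-k experts actually run, and their outputs are blended by the normalized gate weights. The expert functions, gate weights, and class name here are all illustrative assumptions.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

class MoELayer:
    """Toy Mixture-of-Experts layer: route each input to top_k experts."""

    def __init__(self, experts, gate_weights, top_k=2):
        self.experts = experts            # list of callables (the "experts")
        self.gate_weights = gate_weights  # one gating weight per expert
        self.top_k = top_k

    def __call__(self, x):
        # Gate: score every expert for this input (toy linear scoring).
        scores = [w * x for w in self.gate_weights]
        probs = softmax(scores)
        # Route: keep only the top-k experts; the rest are skipped entirely,
        # which is where the compute savings come from.
        top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[: self.top_k]
        norm = sum(probs[i] for i in top)
        # Combine: weighted sum of the selected experts' outputs.
        return sum((probs[i] / norm) * self.experts[i](x) for i in top)

experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x]
layer = MoELayer(experts, gate_weights=[0.5, 1.0, -0.5], top_k=2)
print(layer(3.0))
```

In a real MoE transformer the experts are feed-forward networks and the gate is learned, but the control flow is the same: score, select a few, combine.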
LLMs (large language models) are AI models designed to understand and generate human-like text, trained on vast amounts of data.
Tokens are the units of text (whole words or word fragments) that models process when learning and when responding to queries.
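A minimal sketch of what tokenization looks like, using a made-up five-entry vocabulary (no real model's tokenizer works from a table this small): text is segmented into sub-word pieces, and each piece maps to an integer id the model actually consumes.

```python
# Hypothetical toy vocabulary: sub-word piece -> integer token id.
vocab = {"un": 0, "break": 1, "able": 2, "the": 3, " ": 4}

def tokenize(text, vocab):
    """Greedy longest-match segmentation over the toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            piece = text[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {text[i:]!r}")
    return tokens

pieces = tokenize("unbreakable", vocab)
ids = [vocab[p] for p in pieces]
print(pieces, ids)  # -> ['un', 'break', 'able'] [0, 1, 2]
```

Production tokenizers (e.g. byte-pair encoding) learn their vocabularies from data, but the end result is the same shape: a string becomes a sequence of integer ids.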
Ant Group is focused on developing AI solutions, particularly in healthcare and finance.
Alibaba provides domestic chips that support Ant Group's AI model training efforts.
Isomorphic Labs, the AI drug discovery platform that was spun out of Google's DeepMind in 2021, has raised external capital for the first time. The $600
How to level up your teaching with AI. Discover how to use clones and GPTs in your classroom—personalized AI teaching is the future.
Trump's Third Term? AI already knows how this can be done. A study shows how OpenAI, Grok, DeepSeek & Google outline ways to dismantle U.S. democracy.
Sam Altman today revealed that OpenAI will release an open-weight artificial intelligence model in the coming months. "We are excited to release a powerful new open-weight language model with reasoning in the coming months," Altman wrote on X.