Find the latest UM6P company news.
"RAM is proud to introduce the Medusa™ Array family of products to meet the growing photonic interface need in AI and DC infrastructure," said John Marciante, CEO of RAM Photonics. "Our automated manufacturing platform ensures unmatched precision and industry record yields, positioning RAM as a leader in high-volume fiber interconnects."
Micron's shares fell 8% on Friday, as its dour margin forecast took the shine off a robust quarterly revenue outlook driven by demand for its semiconductors used in artificial intelligence tasks.
DDR5 memory chip samples this week, which the company says is part of its contribution to systems that can keep pace with AI.
Intel is launching its new Intel Xeon 6 processors with performance-cores.
Sonar is built on top of Meta's open-source Llama 3.3 70B and is powered by Cerebras Inference, which Cerebras bills as the world's fastest AI inference engine. The model can produce about 1,200 tokens per second.
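To put that 1,200 tokens-per-second figure in context, the sketch below is a back-of-the-envelope latency estimate rather than a benchmark: the throughput number is the vendor's claim, and time-to-first-token, network latency, and queueing are ignored.

```python
# Rough sanity check of what a 1,200 tokens/s decode rate implies for response time.
# Assumptions: constant decode rate (vendor-claimed figure); time-to-first-token,
# network round trips, and queueing are not modeled.

TOKENS_PER_SECOND = 1200  # claimed decode rate for Sonar on Cerebras Inference

def generation_time(num_tokens: int, tokens_per_second: float = TOKENS_PER_SECOND) -> float:
    """Seconds needed to stream `num_tokens` output tokens at a constant rate."""
    return num_tokens / tokens_per_second

for length in (250, 1000, 4000):  # short answer, long answer, report-length output
    print(f"{length:>5} tokens -> {generation_time(length):.2f} s")
```

At that claimed rate, even a 4,000-token answer would stream in well under four seconds, which is the practical point of the throughput figure.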
OpenAI has launched o3-mini, a new 'reasoning' AI model and the successor to the startup's o1 family of models.
On Friday, OpenAI made o3-mini, the company's most cost-efficient AI reasoning model so far, available in ChatGPT and the API. OpenAI previewed the new reasoning model last December, but now all ChatGPT Plus, Team, and Pro users have full access to o3-mini.
Spurred by competition from DeepSeek, OpenAI has released its frontier o3-mini model for free to all ChatGPT users, while ChatGPT Plus subscribers get the higher-effort o3-mini-high variant.
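For developers, o3-mini is also exposed through the OpenAI API. The snippet below is a minimal sketch, assuming the official openai Python SDK (v1.x) and the reasoning_effort parameter OpenAI documented at launch, which roughly corresponds to the o3-mini / o3-mini-high split in ChatGPT; confirm both against current OpenAI documentation before relying on them.

```python
# Minimal sketch of an o3-mini request via the OpenAI API.
# Assumptions: the `openai` Python SDK (v1.x), an OPENAI_API_KEY environment variable,
# and the `reasoning_effort` parameter as documented at launch ("low" | "medium" | "high").
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="high",  # higher effort spends more reasoning tokens per answer
    messages=[
        {"role": "user", "content": "In two sentences, explain what a reasoning model is."},
    ],
)

print(response.choices[0].message.content)
```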