OpenAI's dissatisfaction with Nvidia chips sparked Cerebras deal
The ChatGPT developer is reportedly unhappy with the speed of certain Nvidia chips and is negotiating with startups that offer alternatives.
The US government wants to protect American companies from rare earth supply shortages through “Project Vault.”
Mozilla is rolling out new AI settings with Firefox 148 on February 24. Users will be able to manage all the browser's generative AI features from a single location, or turn them off entirely, the company announced in a blog post.
The new settings cover translations, automatic image descriptions in PDFs, AI-powered tab grouping, link previews, and a chatbot in the sidebar. The chatbot supports services like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini, and Le Chat Mistral.
For users who want nothing to do with AI, a single toggle turns all of these features off at once. Once enabled, no pop-ups or notifications about current or future AI features will appear, and the settings persist through updates. Users who want to try the new controls early can find them in Firefox Nightly.
OpenAI has released the Codex app for macOS, letting developers control multiple AI agents simultaneously and run tasks in parallel. According to OpenAI, it's easier to use than a terminal, making it accessible to more developers. Users can manage agents asynchronously across projects, automate recurring tasks, and connect agents to external tools via "skills." They can also review and correct work without losing context.
The Codex Mac app is available for ChatGPT Plus, Pro, Business, Enterprise, and Edu accounts, and OpenAI is doubling usage limits for paid plans. The app integrates with the CLI, IDE extension, and cloud through a single account, and Free and Go users can try it for a limited time. The move looks like a response to Claude Code's success with knowledge workers and the growing demand for agentic systems (see Claude Cowork) that can handle more complex tasks than standard chatbots.
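To illustrate the kind of parallel, scriptable workflow the app shares an account with, here is a minimal sketch that fans several tasks out to the Codex CLI at once. It assumes the CLI is installed and exposes a non-interactive "codex exec <prompt>" mode; the task prompts are invented, and the exact subcommand and flags should be checked against the current CLI rather than taken from this sketch.

```python
# Sketch: fan several coding tasks out to the Codex CLI in parallel.
# Assumes `codex exec "<prompt>"` runs one task non-interactively in the
# current repository; treat the subcommand and flags as assumptions.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TASKS = [
    "add type hints to utils.py",          # hypothetical task prompts
    "write unit tests for the parser module",
    "update the README installation section",
]

def run_task(prompt: str) -> tuple[str, int]:
    # Each call launches an independent agent run; output is captured so
    # results can be reviewed afterwards instead of interleaving in the terminal.
    result = subprocess.run(
        ["codex", "exec", prompt],
        capture_output=True,
        text=True,
    )
    return prompt, result.returncode

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=len(TASKS)) as pool:
        for prompt, code in pool.map(run_task, TASKS):
            status = "ok" if code == 0 else f"failed ({code})"
            print(f"{status}: {prompt}")
```

The Mac app handles this kind of fan-out through its UI, with each agent's work reviewable before it is accepted; the script only shows how the underlying CLI could be driven for recurring, automated tasks.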
The AI boom could cost Apple up to $57 more per iPhone – for memory chips alone.
"The rate of increase in the price of memory is unprecedented," says Mike Howard, an analyst at TechInsights, speaking to the Wall Street Journal. By the end of this year, the price of DRAM will quadruple from 2023 levels, and NAND will more than triple.
For Apple, the numbers are stark: the base-model iPhone 18 due this fall could cost $57 more in memory alone compared with the current iPhone 17 – a significant hit to profit margins on a device that retails for $799. That fits with recent rumors that Apple may push back the base model's launch.
The cause: AI companies like OpenAI, Google, and Meta are now outbidding Apple for scarce components. Nvidia has even overtaken Apple as TSMC's largest customer, a position Apple held for years.
Jerry Tworek, one of the minds behind OpenAI's reasoning models, sees a fundamental problem with current AI: it can't learn from mistakes. "If they fail, you get kind of hopeless pretty quickly," Tworek says in the Unsupervised Learning podcast. "There isn't a very good mechanism for a model to update its beliefs and its internal knowledge based on failure."
The researcher, who worked on OpenAI's reasoning models like o1 and o3, recently left OpenAI to tackle this problem. "Unless we get models that can work themselves through difficulties and get unstuck on solving a problem, I don't think I would call it AGI," he explains, describing AI training as a "fundamentally fragile process." Human learning, by contrast, is robust and self-stabilizing. "Intelligence always finds a way," Tworek says.
Other scientists have described this fragility in detail. Apple researchers recently showed that reasoning models can suffer a "reasoning collapse" when faced with problems outside of the patterns they learned in training.