Ad
Skip to content
Read full article about: OpenAI's first AI device won't arrive until 2027 as company ditches "io" branding

OpenAI won't be using the name "io" for its planned AI hardware devices. That's according to a court filing submitted as part of a trademark lawsuit brought by audio startup iyO, Wired reports. OpenAI had already scrubbed references to the project back in June 2025.

OpenAI VP Peter Welinder said the company reviewed its naming strategy and decided against "io." OpenAI also revealed that its first hardware device won't ship until the end of February 2027 at the earliest - later than previously indicated. No packaging or marketing materials exist yet.

OpenAI acquired the hardware startup from former Apple designer Jony Ive for $6.5 billion in May 2025. Over the weekend, a fake Super Bowl ad allegedly showing OpenAI's device made the rounds online. OpenAI spokesperson Lindsay McCallum told Wired the company had nothing to do with it.

Read full article about: Anthropic's head of Safeguards Research warns of declining company values on departure

Anthropic is starting to feel the OpenAI effect. Growing commercialization and the need to raise billions of dollars is forcing the company into compromises, from accepting money from authoritarian regimes and working with the US Department of Defense and Palantir to praising Donald Trump. Now Mrinank Sharma, head of the Safeguards Research Team—the group responsible for keeping AI models safe—is leaving. In his farewell post, he suggests Anthropic has drifted away from its founding principles.

Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.

Mrinank Sharma

The Oxford-educated researcher says the time has come to move on. His departure echoes a pattern already familiar at OpenAI, which saw its own wave of safety researchers leave over concerns that the company was prioritizing revenue growth over responsible deployment. Anthropic was originally founded by former OpenAI employees who wanted to put AI safety first, making Sharma's exit all the more telling.

The new Gemini-based Google Translate can be hacked with simple words

A simple prompt injection trick can turn Google Translate into a chatbot that answers questions and even generates dangerous content, a direct consequence of Google switching the service to Gemini models in late 2025.

Read full article about: Investors believe AI will replace labor costs instead of just software

Investors are betting that AI will replace labor costs, not software budgets.

"We took a view that AI is not 'enterprise' software in the traditional sense of going after IT budgets: it captures labour spend, at some point you’re taking over human workflows end to end," Sebastian Duesterhoeft, a partner at Lightspeed Venture Partners, told the Financial Times.

This logic underpins the current funding round valuing Anthropic at $350 billion: While classic SaaS solutions compete for limited IT budgets, "agentic AI" systems target the far larger pool of labor costs.

The explosive nature of this shift has already been felt in the markets. A series of developments—including new models, specialized industry tools, and news that Goldman Sachs plans to automate banking roles—collectively helped trigger a sell-off in public markets for traditional software stocks. According to the FT, investors are increasingly realizing that autonomous AI agents could threaten existing business models.

Comment Source: FT

Nvidia CEO Jensen Huang claims AI no longer hallucinates, apparently hallucinating himself

Nvidia CEO Jensen Huang claims in a CNBC interview that AI no longer hallucinates. At best, that’s a massive oversimplification. At worst, it’s misleading. Either way, nobody pushes back, which says a lot about the current state of the AI debate.

Japan's lower house election becomes a testing ground for generative AI misinformation

AI-generated fake videos are spreading rapidly across Japanese social media during the lower house election campaign. In a survey, more than half of respondents believed fake news to be true. But Japan is far from the only democracy facing this problem.

A new platform lets AI agents pay humans to do the real-world work they can't

On Rentahuman.ai, AI agents can hire people for real-world tasks, from holding signs to picking up packages. It sounds absurd, but it shows what happens when language models stop just talking and start taking action.