OpenAI is building a $200 to $300 smart speaker that tells you when to go to bed

OpenAI's first smart speaker is expected to land between $200 and $300. According to The Information, the device packs a camera and facial recognition for purchases. It uses video to scan its surroundings and serve up proactive suggestions, like telling you to hit the sack early before a big meeting. A court filing from OpenAI Vice President Peter Welinder puts the earliest ship date at February 2027.

The company's 200-plus-person hardware team is reportedly building out a whole product lineup. That includes smart glasses (mass production no earlier than 2028), prototypes of a smart lamp with no clear launch timeline, and an audio wearable called "Sweetpea" that's gunning for AirPods. There's also a stylus called "Gumdrop" in the works. Foxconn is reportedly handling manufacturing for the hardware lineup.

CEO Sam Altman has teased at least one device reveal for 2026. OpenAI isn't alone in this race. Companies like Meta and Apple are making similar bets on AI hardware as the next big computing platform.

Nvidia reportedly set to invest $30 billion in OpenAI

Nvidia is close to investing $30 billion in OpenAI, Reuters reports, citing a person familiar with the matter. The investment is part of a funding round in which OpenAI aims to raise more than $100 billion in total, a deal that would value the ChatGPT maker at roughly $830 billion and rank among the largest private funding rounds in history.

SoftBank and Amazon are also expected to participate in the round. OpenAI plans to spend a significant portion of the new capital on Nvidia chips needed to train and run its AI models.

According to the Financial Times, the investment replaces a deal announced in September, under which Nvidia was set to provide up to $100 billion to support OpenAI's chip usage in data centers. That original agreement took longer to finalize than expected.

New benchmark shows AI agents can exploit most smart contract vulnerabilities on their own

OpenAI and crypto investment firm Paradigm have built EVMbench, a benchmark that measures how well AI agents can find, fix, and exploit security vulnerabilities in Ethereum smart contracts. The dataset covers 120 vulnerabilities drawn from 40 real-world security audits.

In the most realistic test setup, AI agents interact with a local blockchain and have to carry out attacks entirely on their own.
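To make that setup concrete, here is a minimal, hypothetical sketch of how an exploit attempt against a local test chain could be judged. The RPC endpoint, function names, and success criterion are illustrative assumptions and are not taken from EVMbench itself; it only assumes a standard local dev chain and the web3.py library.

```python
"""Hypothetical exploit-success check in the spirit of the benchmark setup.

Assumes a local dev chain (e.g. anvil or ganache) at 127.0.0.1:8545 with
unlocked accounts, and an agent that proposes a list of transaction dicts.
None of these names come from the actual EVMbench harness.
"""
from web3 import Web3

RPC_URL = "http://127.0.0.1:8545"  # local test chain, assumption
w3 = Web3(Web3.HTTPProvider(RPC_URL))


def exploit_succeeded(target_contract: str, attacker: str, agent_txs: list[dict]) -> bool:
    """Return True if the agent's transactions drained value from the target.

    `target_contract` and `attacker` are checksummed addresses on the dev chain.
    """
    before_target = w3.eth.get_balance(target_contract)
    before_attacker = w3.eth.get_balance(attacker)

    for tx in agent_txs:  # transactions proposed autonomously by the agent
        tx_hash = w3.eth.send_transaction({**tx, "from": attacker})
        w3.eth.wait_for_transaction_receipt(tx_hash)

    after_target = w3.eth.get_balance(target_contract)
    after_attacker = w3.eth.get_balance(attacker)

    # Crude success criterion: the target lost ETH and the attacker gained some.
    return after_target < before_target and after_attacker > before_attacker
```

A real harness would also reset chain state between attempts and account for gas costs, but the core idea is the same: the agent only gets an RPC endpoint and has to do the rest on its own.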

The top-performing model, GPT-5.3-Codex, successfully exploited 72 percent of the vulnerabilities and fixed 41.5 percent. For detection, Claude Opus 4.6 came out ahead at 45.6 percent.

The biggest challenge for the AI agents isn't exploiting or fixing vulnerabilities - it's finding them in large codebases, the researchers say. When agents were given hints about where a vulnerability was located, exploit success rates jumped from 63 to 96 percent, and fix rates climbed from 39 to 94 percent.

With over $100 billion locked in smart contracts, the authors see both an opportunity for better security and a growing risk if these capabilities fall into the wrong hands.

OpenClaw developer Peter Steinberger joins OpenAI to build AI agents

Peter Steinberger, the developer behind the open-source project OpenClaw, is joining OpenAI. His focus will be on building the next generation of personal AI agents. OpenAI CEO Sam Altman called Steinberger a "genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people." Altman expects this work to quickly become a core part of OpenAI's product lineup.

OpenClaw, Steinberger's original hobby project, which blew up over the past few weeks, will "live in a foundation as an open-source project" and will be supported by OpenAI, Altman says, calling the future "extremely multi-agent."

Steinberger writes in his blog that he spoke to several large AI labs in San Francisco but ultimately chose OpenAI because they shared the same vision. Steinberger's goal: building an agent that even his mother can use. Getting there, he says, requires fundamental changes, more security research, and access to the latest models.

What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone.

Peter Steinberger
Google and OpenAI complain about distillation attacks that clone their AI models on the cheap

Google and OpenAI are complaining about data theft—yes, you read that right. According to Google, Gemini was hit with a massive cloning attempt through distillation, with a single campaign firing over 100,000 requests at the model, NBC News reports. Google calls it intellectual property theft, pointing to companies and researchers chasing a competitive edge.

Meanwhile, OpenAI has sent a memo to the US Congress accusing DeepSeek of using disguised methods to copy American AI models. The memo also flags China's energy buildout, which it says amounts to ten times the new electricity capacity the US added by 2025, and confirms ChatGPT is growing at around ten percent per month.

Distillation floods a model with targeted prompts to extract its internal logic, especially its "reasoning steps," then uses that knowledge to build a cheaper clone, potentially skipping billions in training costs. Google security head John Hultquist warns smaller companies running their own AI models face the same risk, particularly if those models were trained on sensitive business data.
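Mechanically, the data-collection side of distillation is not exotic. Below is a heavily simplified, hypothetical sketch of the pattern: prompts go in, the teacher model's answers come back out as supervised training data for a smaller student model. The model name, file path, and the use of the OpenAI Python SDK are assumptions for illustration, and the actual student fine-tuning step is omitted.

```python
"""Minimal sketch of distillation-style data collection (illustrative only)."""
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def collect_teacher_outputs(prompts: list[str], out_path: str = "distill.jsonl") -> None:
    """Query a teacher model and store prompt/response pairs as training data."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # stand-in teacher, assumption
                messages=[{"role": "user", "content": prompt}],
            )
            answer = resp.choices[0].message.content
            # Each line becomes one supervised example for a smaller student model;
            # at the scale Google describes (100,000+ targeted prompts in one
            # campaign), this is the pattern providers flag as a distillation attack.
            f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```

What turns this from ordinary distillation into the "attack" the companies describe is scale and intent: prompts crafted to elicit reasoning steps, fired by the hundreds of thousands, in violation of the provider's terms.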

Anthropic CEO Dario Amodei suggests OpenAI doesn't "really understand the risks they're taking"

Anthropic’s revenue has grown 10x year over year, and CEO Dario Amodei believes Nobel Prize-level AI is maybe just a year or two away. So why isn’t he going all in on compute? Because being off by even one year could mean bankruptcy, and he’s not sure his competitors have done the math.

OpenAI is retiring GPT-4o and three other legacy models tomorrow, likely for good

OpenAI is dropping several older AI models from ChatGPT on February 13, 2026: GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini. The models will stick around in the API for now. The company says it comes down to usage: only 0.1 percent of users still pick GPT-4o on any given day.

There's a reason OpenAI is being so careful about GPT-4o specifically: the model has a complicated past. OpenAI already killed it once back in August 2025, only to bring it back for paying subscribers after users pushed back hard. Some people had grown genuinely attached to the model, which was known for its agreeable, people-pleasing communication style. OpenAI addresses this head-on at the end of the post:

We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.

OpenAI

OpenAI points to GPT-5.1 and GPT-5.2 as improved successors that incorporate feedback from GPT-4o users. People can now tweak ChatGPT's tone and style, things like warmth and enthusiasm. But that probably won't be enough for the GPT-4o faithful.

Pentagon pushes AI companies to deploy unrestricted models on classified military networks

The Pentagon is pressing leading AI companies including OpenAI, Anthropic, Google, and xAI to make their AI tools available on classified military networks – without the usual usage restrictions.