xAI's founder exodus reportedly tied to safety concerns and frustration over Grok's failure to catch up

xAI has lost half of its founders in recent weeks and months. Elon Musk said on X that some departures were part of a restructuring where "unfortunately we had to part with some people" to "improve speed of execution."

But former employees tell a different story. One ex-employee told The Verge that many people at the company had grown disillusioned with Grok's focus on NSFW content and its lack of safety standards. A second former employee backed that up: "There is zero safety whatsoever in the company." According to the source, Musk deliberately pushed to make the model less restricted, viewing safety measures as censorship. Among other things, Grok had generated sexualized images of children.

You survive by shutting up and doing what Elon wants.

Another common complaint is that xAI is "stuck in the catch-up phase," shipping nothing fundamentally new compared to OpenAI or Anthropic. Several people who left are now using proceeds from the SpaceX merger to start their own companies, including the AI infrastructure startup Nuraline.

Anthropic recruits ex-Google data center veterans to build its own AI infrastructure empire

Anthropic is discussing building at least 10 gigawatts of data center capacity worth hundreds of billions of dollars, recruiting ex-Google managers and lining up Google as a financial backer to make it happen.

Anthropic raises $30 billion, pushing valuation to $380 billion

Anthropic has closed a $30 billion Series G funding round, bringing the AI company's post-money valuation to $380 billion.

The round was led by GIC, Singapore's sovereign wealth fund, and U.S. investment firm Coatue. D. E. Shaw Ventures, Dragoneer, Founders Fund, ICONIQ, and MGX, an Abu Dhabi-based technology investment fund, joined as co-leads. Microsoft and Nvidia also participated, building on previously announced strategic partnerships. Anthropic says it will use the capital for research, product development, and infrastructure expansion.

Anthropic reports annualized revenue of $14 billion, having grown more than tenfold in each of the past three years. Claude Code, the company's coding tool, now accounts for over $2.5 billion in annualized revenue on its own.

One notable detail about how companies are using AI: more than 500 customers spend over $1 million per year on Claude, according to Anthropic, and eight of the ten largest Fortune 500 companies are among its users.

Microsoft AI CEO: "Most" white-collar tasks will be automated in 18 months

Microsoft AI CEO Mustafa Suleyman predicts the end of traditional white-collar work in 18 months.

"I think that we're going to have a human-level performance on most, if not all, professional tasks," Suleyman says in an interview with the Financial Times. "So white-collar work, where you're sitting down at a computer, either being a lawyer or an accountant or a project manager or a marketing person — most of those tasks will be fully automated by an AI within the next 12 to 18 months."

Suleyman leads Microsoft's AI division, which has invested billions in OpenAI and Anthropic and operates Copilot, one of the most widely used AI work tools. He describes the shift as already underway: In software engineering, developers are already using "AI-assisted coding for the vast majority of their code production."

Anthropic CEO Dario Amodei has gone even further, predicting that half of entry-level office jobs could disappear within one to five years. He says he is already seeing companies need fewer junior and mid-level employees, and expects AI to outperform humans in many areas within one to two years, with the labor market adapting only after a delay.

Suleyman's boss, Microsoft CEO Satya Nadella, on the other hand, sees more of a shift where existing cognitive tasks might be automated, but new, more demanding tasks would emerge.

Source: FT
OpenAI is retiring GPT-4o and three other legacy models tomorrow, likely for good

OpenAI is dropping several older AI models from ChatGPT on February 13, 2026: GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini. The models will stick around in the API for now. The company says it comes down to usage: only 0.1 percent of users still pick GPT-4o on any given day.

There's a reason OpenAI is being so careful about GPT-4o specifically: the model has a complicated past. OpenAI already killed it once back in August 2025, only to bring it back for paying subscribers after users pushed back hard. Some people had grown genuinely attached to the model, which was known for its sycophantic, people-pleasing communication style. OpenAI addresses this head-on at the end of the post:

We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.

OpenAI

OpenAI points to GPT-5.1 and GPT-5.2 as improved successors that incorporate feedback from GPT-4o users. People can now tweak ChatGPT's tone and style, things like warmth and enthusiasm. But that probably won't be enough for the GPT-4o faithful.

Anthropic promises to cover consumer electricity costs from new data center construction

The company plans to fully absorb grid upgrade costs, invest in new power generation, and cap its data centers' energy consumption during peak hours. Anthropic CEO Dario Amodei told NBC News that the costs of AI models should fall on Anthropic, not on citizens.

Microsoft and OpenAI made similar commitments back in January. The pledges come amid growing political pressure: New York senators introduced a bill that would pause new data center permits, and Senator Van Hollen is pushing legislation that would require AI companies to cover expansion costs themselves.

According to Politico, the Trump administration is also preparing a voluntary agreement that would commit AI companies to covering electricity price increases. The Lawrence Berkeley National Lab estimates that data centers could consume around 12 percent of all US electricity by 2028 - up from 4.4 percent in 2024.

OpenAI reportedly uses a "special version" of ChatGPT to hunt down internal leakers by scanning Slack and email

OpenAI uses a "special version" of ChatGPT to track down internal information leaks. That's according to a report from The Information, citing a person familiar with the matter. When a news article about internal operations surfaces, OpenAI's security team feeds the text into this custom ChatGPT version, which has access to internal documents as well as employees' Slack and email messages.

The system then suggests possible sources of the leak by identifying files or communication channels that contain the published information and showing who had access to them. It's unclear whether OpenAI has actually caught anyone using this method.

What exactly makes this version special isn't known, but there's a clue: OpenAI engineers recently presented the architecture of an internal AI agent that could serve this purpose. It's designed to let employees run complex data analysis using natural language and has access to institutional knowledge stored in Slack messages, Google Docs, and more.

OpenAI upgrades Responses API with features built specifically for long-running AI agents

OpenAI is adding new capabilities to its Responses API that are built specifically for long-running AI agents. The update brings three major features: server-side compression that keeps agent sessions going for hours without blowing past context limits, controlled internet access for OpenAI-hosted containers so they can install libraries and run scripts, and "skills": reusable bundles of instructions, scripts, and files that agents can pull in and execute on demand.

Skills work as a middle layer between system prompts and tools. Instead of stuffing long workflows into every prompt, developers can package them as versioned bundles that only kick in when needed. They ship as ZIP files, support versioning, and work in both hosted and local containers through the API. OpenAI recommends building skills like small command-line programs and pinning specific versions in production.
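As a rough illustration of the packaging described above, here is how such a versioned skill bundle might be assembled. The file layout (an instructions file, a script, and a version marker) is an assumption for illustration; OpenAI's exact manifest format isn't covered here.

```python
import io
import zipfile

def build_skill_bundle(name: str, version: str, instructions: str, script: str) -> bytes:
    """Package a skill as a versioned ZIP bundle (layout is illustrative)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Instructions the agent pulls in only when the skill is needed
        zf.writestr(f"{name}/INSTRUCTIONS.md", instructions)
        # Executable helper the agent can run inside its container
        zf.writestr(f"{name}/scripts/run.py", script)
        # Pin the version so production agents load a known bundle
        zf.writestr(f"{name}/VERSION", version)
    return buf.getvalue()

bundle = build_skill_bundle(
    "report-writer",
    "1.2.0",
    "# Report writer\nGenerate a weekly status report from CSV input.",
    "print('generating report...')",
)
names = zipfile.ZipFile(io.BytesIO(bundle)).namelist()
```

Treating each skill like a small, pinned command-line program, as OpenAI suggests, keeps the agent's prompt lean: the bundle is only unpacked and executed when the workflow actually calls for it.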

ByteDance turns to Samsung for custom AI chip production and scarce memory supplies

ByteDance is in talks with Samsung to produce a custom AI chip, a deal that could also give the TikTok parent company access to hard-to-get memory chips, according to Reuters.

ByteDance is developing its own AI inference chip under the codename SeedChip and is negotiating with Samsung to manufacture it, Reuters reports. What makes the deal especially valuable for ByteDance is that the talks also cover access to memory chip supplies, which are extremely scarce amid the global AI infrastructure buildout.

The company plans to receive its first sample chips by the end of March and produce at least 100,000 units this year, with a possible ramp-up to 350,000. ByteDance intends to spend more than 160 billion yuan (roughly $22 billion) on AI-related procurement in 2026 - more than half of that going toward Nvidia chips, including H200 models, and the development of its own chip.

ByteDance executive Zhao Qi acknowledged during an internal meeting in January that the company's AI models still trail global leaders like OpenAI, but pledged continued support for AI development. ByteDance itself denies the chip project: a spokesperson told Reuters the information was inaccurate, without providing further details.

OpenAI says ChatGPT update improves response style and quality

OpenAI released an update for GPT-5.2 Instant in ChatGPT and the API on February 10, 2026. The company says the update improves response style and quality, with more measured, contextually appropriate tone and clearer answers to advice and how-to questions that place the most important information up front. CEO Sam Altman addressed the scope of the changes: "Not a huge change, but hopefully you find it a little better."

The update targets the "Instant" variant, the model without reasoning steps. In the API, developers can access it via "gpt-5.2-chat-latest". In ChatGPT, users need to switch to "Instant" in the model picker. The model also kicks in automatically when GPT-5's router determines a reasoning model isn't necessary, or when users have run out of credits for heavier models, something that happens especially often on the free tier.
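For developers, picking up the update is just a matter of the model string. A minimal sketch, assuming the standard OpenAI Python SDK chat interface; the actual network call is commented out since it requires an API key, and the "gpt-5.2-chat-latest" alias is taken from the article:

```python
def build_request(prompt: str) -> dict:
    """Build chat-completion parameters targeting the updated Instant model."""
    return {
        # Alias from the article for the latest GPT-5.2 Instant snapshot
        "model": "gpt-5.2-chat-latest",
        "messages": [{"role": "user", "content": prompt}],
    }

request = build_request("Summarize this update in one sentence.")

# To actually send the request (needs OPENAI_API_KEY set):
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
# print(response.choices[0].message.content)
```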