Man who firebombed Sam Altman's home was likely driven by AI extinction fears

A man threw a firebomb at OpenAI CEO Sam Altman’s San Francisco home in the middle of the night. The suspect was a member of the PauseAI Discord server and had posted online about AI driving humanity to extinction.

OpenAI employee tries to explain usage limits of the new ChatGPT Pro plans

OpenAI recently expanded its pricing options to include a $100 plan. But the company hasn't been particularly clear about how the usage limits differ from the existing $200 plan. OpenAI employee Thibault Sottiaux tried to clear things up, with an emphasis on trying.

According to Sottiaux, the $100 plan offers at least ten times the Plus usage, while the $200 plan offers at least twenty times. But both figures only reflect a temporary 2x usage boost that runs through May 31. On top of that, the $200 plan has had this boost since February, but OpenAI never explicitly documented it.

Once the boost expires at the end of May, usage could drop to at least five times and ten times Plus usage, respectively. Sottiaux didn't directly confirm these base values, though.

The confusion started because OpenAI's pricing page listed "5x or 20x usage." According to Sottiaux, the misleading labels led many users to assume the 2x boost would double both numbers to ten times and forty times. In reality, "20x" was already the boosted value for the $200 plan, while "5x" represented the base value of the cheaper plan.
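Sottiaux's explanation boils down to simple doubling around a base value. A minimal sketch of the arithmetic; the base figures are the values inferred in the article, which Sottiaux did not directly confirm:

```python
BOOST = 2  # temporary 2x usage boost, running through May 31

# Base multiples of Plus usage (inferred from the article, not confirmed by OpenAI)
plans = {
    "$100 plan": {"base": 5},
    "$200 plan": {"base": 10},
}

for name, p in plans.items():
    boosted = p["base"] * BOOST
    print(f"{name}: base {p['base']}x Plus, boosted {boosted}x Plus")
```

This reproduces the figures in the story: the pricing page's "5x" was the $100 plan's unboosted base, while "20x" was the $200 plan's already-boosted value, so users who applied the 2x boost to both numbers arrived at the wrong 10x/40x reading.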

OpenAI tells investors its infrastructure gives it an edge over Anthropic

OpenAI is pitching investors on the idea that its early infrastructure buildout gives it a decisive advantage over Anthropic. Meanwhile, the company is pausing its UK data center project, and Anthropic is exploring custom AI chips.

OpenAI is building a cybersecurity product for a select group of companies

According to Axios, OpenAI is working on a new cybersecurity product that will only be available to a small group of companies.

Axios initially reported that OpenAI was releasing a new model, drawing comparisons to Anthropic, which on Tuesday limited its new Mythos Preview model to select technology and security firms because of its advanced hacking capabilities.

Axios has since corrected its reporting: the limited rollout only applies to the cybersecurity product, not OpenAI's upcoming "Spud" model.

The product will be distributed through "Trusted Access for Cyber," a pilot program OpenAI launched in February alongside the release of GPT-5.3-Codex. Participants in the program get access to especially capable models for defensive security work, backed by $10 million in API credits.

Musk updates OpenAI lawsuit to redirect potential $150B in damages to the nonprofit foundation

Elon Musk has updated his lawsuit against OpenAI and Microsoft. He's now asking that any damages, potentially more than $150 billion, go not to him but to OpenAI's charitable foundation. He's also pushing for the removal of CEO Sam Altman from the foundation's board, according to the Wall Street Journal. Musk's lawyer, Marc Toberoff, said Musk "is not seeking a single dollar for himself."

Musk accuses OpenAI of abandoning its charitable mission and defrauding him as a donor by exploiting its nonprofit status. He wants Altman and OpenAI President Greg Brockman to turn over their shares and financial benefits to the foundation. The trial is set to begin in April in Oakland, California.

Musk argues that OpenAI betrayed the mission he helped fund. However, early interview notes show he agreed to add a for-profit unit in 2017 and actively discussed the transition while keeping the nonprofit in place.

On X, OpenAI called the lawsuit "a harassment campaign driven by ego, jealousy and a desire to slow down a competitor." The company has also asked the attorneys general of Delaware and California to investigate Musk's behavior. OpenAI is currently valued at $852 billion and planning an IPO.

OpenAI, Anthropic, and Google team up against unauthorized Chinese model copying

OpenAI, Anthropic, and Google have started working together to combat the unauthorized copying of their AI models by Chinese competitors, according to Bloomberg. The three companies are sharing information through the "Frontier Model Forum," founded in 2023, to detect so-called adversarial distillation. In distillation, the outputs of an existing AI model are used to train a cheaper copycat model. One of the first examples was Stanford's Alpaca model, which demonstrated the feasibility of the approach, but the practice has since become a real problem for US companies.
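The distillation loop described above can be sketched in a few lines. This is a deliberately toy illustration of the data-collection step, with a placeholder standing in for the API call to the proprietary teacher model; all function names here are hypothetical:

```python
# Toy sketch of output-based distillation (illustrative only).
def teacher_model(prompt: str) -> str:
    # Placeholder for an API call to a large proprietary model.
    return prompt.upper()

def collect_distillation_data(prompts):
    # A copycat trainer harvests (prompt, output) pairs from the
    # teacher's API, then fine-tunes a cheaper student model on them.
    return [(p, teacher_model(p)) for p in prompts]

dataset = collect_distillation_data(["hello", "world"])
print(dataset)  # pairs the student would be trained to imitate
```

Detection efforts like the Frontier Model Forum's focus on the first half of this loop: spotting API usage patterns that look like systematic harvesting of model outputs rather than ordinary end-user traffic.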

US authorities estimate that adversarial distillation costs American AI labs billions of dollars in lost revenue each year, Bloomberg reports. OpenAI had already warned Congress in February that Deepseek was using increasingly sophisticated methods to extract data from US models. Anthropic identified Deepseek, Moonshot, and Minimax as actors involved in the practice. The collaboration mirrors how the cybersecurity industry operates, where companies routinely share attack data with each other.

OpenAI's safety brain drain finally gets an explanation and it's just Sam Altman's vibes

“My vibes don’t really fit.” In a new New Yorker profile based on over 100 interviews, Sam Altman explains why safety researchers keep leaving OpenAI and why shifting commitments others might call deception are just part of the job.