
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
OpenAI's Sora burned a million dollars a day while losing half its users in record time

OpenAI's Sora app saw rapidly declining usage while costing the company around one million dollars a day, according to the Wall Street Journal. After a hyped launch, the app grew to about one million users, but that number quickly dropped to around 500,000 and never recovered.

On top of the shrinking user base, OpenAI ran into copyright issues and growing internal concerns that the cheap, low-quality engagement videos people were generating could damage the OpenAI brand. Sora proved more liability than asset. Development costs piled up too. According to the report, OpenAI canceled training runs for new video models entirely.

The real nail in Sora's coffin was increasing competitive pressure from Anthropic. OpenAI chose to redirect its limited compute toward coding, enterprise, and agent-based AI products, areas with greater long-term business value. Sora fell victim to a strategic pivot: away from complex video generation, toward the most economically promising parts of the business. The Sora team will now focus on world models for robotics. The Sora app shuts down in April, with the API following in September.

Mistral AI borrows 830 million dollars to operate a new data center near Paris

Mistral AI has taken out a loan of 830 million dollars. The money will fund the operation of a new data center in Bruyères-le-Châtel, near Paris. The deal lets Mistral avoid giving up any company shares, but it also saddles the startup with significant debt, a risk for both the company and the banks backing the loan, especially given that Mistral is unlikely to be profitable anytime soon.

The new facility will be equipped with 13,800 NVIDIA Grace Blackwell GB300 GPUs and deliver 44 megawatts of power capacity. A consortium of global banks is backing the loan, including Bpifrance, BNP Paribas, Crédit Agricole CIB, HSBC, La Banque Postale, MUFG, and Natixis.

By the end of 2027, the French AI company plans to provide 200 megawatts of computing capacity across Europe to meet demand from governments and businesses looking to build their own AI systems. Mistral is the only European frontier AI startup positioned to benefit from growing concerns across the continent about technological dependence on the US.

OpenAI's Sam Altman and Science VP Kevin Weil hype AI-assisted dog cancer story, ignoring that there's no proof the vaccine worked

An Australian AI consultant used ChatGPT, AlphaFold, and Grok to find a possible treatment for his dog Rosie's incurable cancer. The story went viral after high-profile AI executives like OpenAI's Greg Brockman and DeepMind's Demis Hassabis shared it as proof of what AI can already do. But there's no evidence the AI-designed vaccine actually worked.

Eli Lilly signs $2.75 billion deal with AI drug developer Insilico Medicine

Eli Lilly has signed a $2.75 billion deal with Hong Kong-listed AI pharmaceutical company Insilico Medicine. The partnership aims to bring AI-developed drugs to the global market. Insilico will receive $115 million upfront, with the rest tied to regulatory and commercial milestones as well as license fees, both companies announced.

According to founder and CEO Alex Zhavoronkov, Insilico has developed at least 28 drugs using generative AI, with nearly half already in clinical trials. The two companies have been working together since 2023.

Zhavoronkov told CNBC that Lilly actually outperforms Insilico in some areas of AI. Andrew Adams of Lilly called Insilico's AI research "a powerful complement" to its own clinical development efforts. Insilico is building its AI capabilities in Canada and the Middle East, while early drug development takes place in China. Eli Lilly is also working with a DeepMind subsidiary on AI-driven medicine.

Source: CNBC
Google's new Gemini API Agent Skill patches the knowledge gap AI models have with their own SDKs

Google has built an "Agent Skill" for the Gemini API that tackles a fundamental problem with AI coding assistants: once trained, language models don't know about their own updates or current best practices. The new skill feeds coding agents up-to-date information about current models, SDKs, and sample code. In tests across 117 tasks, the top-performing model, Gemini 3.1 Pro Preview, jumped from a 28.2 percent success rate to 96.6 percent. Skills were first introduced late last year by Anthropic and quickly adopted by other AI companies.

Success rates of Gemini models with and without the agent skill across 117 coding tasks. Newer models in the 3 series benefit far more from the skill than older models, which Google attributes to their stronger reasoning capabilities. | Image: Google

Older 2.5 models saw much smaller improvements, which Google says comes down to weaker reasoning abilities. Interestingly, a Vercel study suggests that giving models direct instructions through AGENTS.md files could be even more effective. Google is exploring other approaches as well, including MCP services. The skill is available on GitHub.
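Mechanically, a skill of this kind boils down to injecting reference text into the agent's context before the model sees the task. A minimal sketch of what that could look like, assuming a skill shipped as a directory of markdown files (the file layout and prompt wording here are hypothetical illustrations, not Google's actual implementation):

```python
# Hypothetical sketch: injecting an agent skill's reference material into
# a coding agent's prompt. The directory layout and prompt format are
# assumptions for illustration, not Google's published skill format.

from pathlib import Path

def build_prompt(task: str, skill_dir: str) -> str:
    """Prepend up-to-date SDK notes from a skill directory to a coding task."""
    sections = []
    for doc in sorted(Path(skill_dir).glob("*.md")):
        # Each markdown file becomes one reference section in the context.
        sections.append(f"## {doc.stem}\n{doc.read_text()}")
    skill_context = "\n\n".join(sections)
    return (
        "You are a coding agent. Current API reference material:\n\n"
        f"{skill_context}\n\n"
        f"Task: {task}"
    )
```

The same idea explains why an AGENTS.md file can compete with a dedicated skill: both are just ways of getting current documentation into the context window before the model starts reasoning.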

Anthropic reportedly views itself as the antidote to OpenAI's "tobacco industry" approach to AI

Anthropic grew out of more than just concern for AI safety—it was born from a bitter power struggle and personal conflict at OpenAI. A report by Sam Altman biographer Keach Hagey reveals how personal slights, rivalries, and strategic disagreements led to what may be the most consequential split in the AI industry.

OpenAI sets two-stage Sora shutdown with app closing April 2026 and API following in September

OpenAI is killing Sora in two stages. The web and app version goes dark on April 26, 2026, with the Sora API following on September 24, 2026. OpenAI is urging users to download their content before the cutoff dates. Videos and images can be exported directly from the Sora library.

The company says it hasn't decided yet whether there will be a final export window after those dates. If one happens, users will get an email heads-up. Once all deadlines pass, user data gets permanently deleted. The shutdown also takes down the sora.chatgpt.com platform, which handled image and video generation. Full details are on OpenAI's help page under "What to know about the Sora discontinuation."

Sora's demise is part of a bigger strategic pivot. OpenAI wants to funnel compute toward coding tools and enterprise customers, a play that mirrors rival Anthropic, and toward a super app that rolls ChatGPT and other tools into one package. Sora will stick around as a research project focused on world models, with the long-term goal of "automating the physical economy."

Google's new Gemini update makes it easy to import memories from ChatGPT and Claude

Google is borrowing Anthropic's memory import approach, letting Gemini users bring over saved reminders, preferences, and full chat histories from apps like ChatGPT and Claude. The process works by copying a suggested prompt into the previous AI app, generating a summary, and pasting it into Gemini, which saves the information in its own context. Users can also upload chat histories as a ZIP file (up to 5 GB) and continue previous conversations inside Gemini. Google is renaming "Past Chats" to "Memory," with the rollout happening gradually.

Google's new memory import feature in Gemini: users copy a prompt into their previous AI app, then paste the generated summary into Gemini. | Image: Google

Anthropic pioneered this approach after OpenAI drew criticism for a military deal Anthropic had turned down on ethical grounds. With users already looking to switch, Anthropic wanted to give them an extra reason to make the move. Both Google and Anthropic rely on the same basic method for data extraction—a simple prompt that asks the existing AI app to output everything it has stored about the user.

Cohere releases open source model that tops speech recognition benchmarks

Canadian AI company Cohere has released "Transcribe," a new open-source model for automatic speech recognition. The company says Transcribe claims the top spot on the Hugging Face Open ASR Leaderboard with an average word error rate of just 5.42 percent, beating out competitors like OpenAI's Whisper Large v3, ElevenLabs Scribe v2, and Qwen3-ASR-1.7B. Cohere says it also delivers the best throughput among similarly sized models.

The chart compares seven speech recognition models with more than one billion parameters: the x-axis shows accuracy as word error rate (WER, lower is better), and the y-axis shows throughput (RTFx), measuring how fast a model processes audio relative to real time. Cohere Transcribe leads with an RTFx of 525 at a WER of about 5.4 percent, making it both the fastest and most accurate model; NVIDIA Canary Qwen 2.5B follows with an RTFx of 418, while models like OpenAI's Whisper Large v3 and Voxtral Realtime are significantly slower and less accurate.

Cohere Transcribe compared with seven other speech recognition models. Models closer to the upper left corner perform best, meaning faster throughput and lower word error rates. | Image: Cohere

The 2 billion parameter model supports 14 languages, including English, German, French, and Japanese. It's available for download under the Apache 2.0 license on Hugging Face and can also be accessed through Cohere's API and the Model Vault platform. Cohere plans to integrate Transcribe into its AI agent platform North in the future.
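For readers unfamiliar with the leaderboard's headline metric: word error rate is the word-level edit distance between the reference transcript and the model's output, divided by the reference length. A minimal sketch of the standard definition (an illustration, not Cohere's or Hugging Face's evaluation code):

```python
# Word error rate (WER): Levenshtein edit distance over words between a
# reference transcript and a hypothesis, normalized by reference length.
# A WER of 0.0542 corresponds to the leaderboard's 5.42 percent figure.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance table
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

The throughput axis (RTFx) is simpler: audio duration divided by processing time, so an RTFx of 525 means the model transcribes 525 seconds of audio per second of compute.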

Federal judge blocks Trump's ban on Anthropic AI models, calls security risk label "Orwellian"

Anthropic has secured a preliminary injunction against the Trump administration in a federal court in San Francisco. Judge Rita Lin temporarily blocked President Trump's order banning federal agencies from using Anthropic's AI models, along with the Pentagon's classification of the company as a security risk.

Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation. [...] Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.

Rita F. Lin, United States District Judge

The dispute traces back to a failed $200 million contract. The Pentagon wanted unrestricted access to Anthropic's Claude models, but Anthropic insisted on guarantees that the models wouldn't be used for autonomous weapons or mass surveillance. Defense Secretary Pete Hegseth then classified Anthropic as a "supply chain risk," making it the first U.S. company to receive that designation. A final ruling is still pending.