
Matthias Bastian

Nvidia's Huang disputes Anthropic CEO's claim that AI will eliminate half of entry-level office jobs

Nvidia CEO Jensen Huang is pushing back against Anthropic CEO Dario Amodei, adding to a week of criticism already aimed at Amodei by Meta's chief AI scientist Yann LeCun. Speaking at VivaTech in Paris, Huang disagreed with Amodei's claim that AI could replace half of all entry-level office jobs within five years. Huang also accused Amodei of portraying AI as so dangerous that only Anthropic could develop it responsibly, while at the same time framing it as so expensive and powerful that everyone else should be shut out. Instead, Huang called for a more open approach to AI development.

If you want things to be done safely and responsibly, you do it in the open … Don’t do it in a dark room and tell me it's safe.

Jensen Huang

LeCun, for his part, echoed Huang's remarks and renewed his criticism of Amodei.

Google Deepmind launches Weather Lab to test AI models for tropical cyclone forecasting

Google Deepmind and Google Research have launched Weather Lab, a public platform that tests AI models for forecasting tropical cyclones. The new system uses a type of machine learning called stochastic neural networks to predict storm formation, path, strength, size and shape up to 15 days ahead. Deepmind says its model produced more accurate results in tests than traditional physics-based systems such as ECMWF's ENS and NOAA's HAFS. Forecasts are being reviewed by experts at the U.S. National Hurricane Center and Colorado State's CIRA. Weather Lab is intended as a research tool and does not replace official warnings. Users can also explore forecasts for past storms.

Image: Google Deepmind
Mistral AI launches Mistral Compute to deliver private AI infrastructure for European institutions

Mistral AI has launched Mistral Compute, a new AI platform offering private infrastructure for governments, companies, and research institutions. It includes server hardware with Nvidia graphics processors, training tools, and programming interfaces, and runs in a data center in Essonne, France, equipped with 18,000 Nvidia Grace Blackwell chips. The setup lets users run their own AI models without relying on American or Chinese cloud providers. Mistral says the platform follows European data protection rules and is one of the largest AI infrastructure projects in Europe. Launch partners include BNP Paribas, Thales, and Black Forest Labs.

The four service tiers of Mistral Compute range from Bare Metal (pure capacity access) and Managed Infra (virtualization in a private environment) to Private AI Studio (developer environment with APIs in private clusters) and AI Studio (APIs and services for a quick start in AI development). | Image: Mistral
OpenAI hits $10 billion in annual revenue

OpenAI's annual recurring revenue (ARR) has hit $10 billion, calculated by multiplying its current monthly revenue by 12, according to a company spokesperson. The total comes from ChatGPT subscriptions and API sales, but does not include Microsoft licensing or custom contracts. Internal forecasts seen by The Information suggest OpenAI could reach $13 billion in annual revenue in 2025. The company is aiming for $174 billion by 2030. Under its deal with Microsoft, OpenAI shares 20% of its revenue. In 2024, annual recurring revenue was $5.5 billion.

Source: CNBC
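As a quick illustration of how that run-rate figure is derived (a sketch of the generic ARR calculation, not OpenAI's internal accounting), annualizing simply multiplies current monthly revenue by twelve:

```python
# Generic annual-recurring-revenue (ARR) run-rate calculation, as described
# above: current monthly revenue multiplied by 12. The monthly figure here is
# only implied by the reported $10 billion ARR, not a disclosed number.
reported_arr_usd = 10_000_000_000
implied_monthly_revenue = reported_arr_usd / 12   # roughly $833 million per month
arr_from_monthly = implied_monthly_revenue * 12   # annualized run rate

print(f"Implied monthly revenue: ${implied_monthly_revenue / 1e6:.0f}M")
print(f"ARR: ${arr_from_monthly / 1e9:.1f}B")
```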
A single ChatGPT query uses as much energy as a Google search did in 2009

In a romanticized essay on the supposedly imminent singularity, OpenAI CEO Sam Altman mentions an interesting figure about ChatGPT's energy consumption: a single ChatGPT request consumes an average of 0.34 watt-hours of electricity and 0.000085 gallons of water. That energy usage is roughly on par with what a Google search consumed back in 2009, but Altman leaves out the fact that ChatGPT is likely handling far more requests per person. And with new AI models like multimodal systems, agents, and advanced reasoning engines demanding even more compute, the rapid expansion of data centers suggests the overall energy appetite of these systems is only increasing.
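As a rough back-of-envelope sketch of why the per-query number still adds up, here is how it scales; the 0.34 Wh figure comes from Altman's essay, while the daily request volume below is a purely hypothetical assumption for illustration:

```python
# Back-of-envelope scaling of the per-query energy figure.
# 0.34 Wh per request is from Altman's essay; the daily request volume
# is a hypothetical assumption, not a reported number.
WH_PER_QUERY = 0.34
QUERIES_PER_DAY = 1_000_000_000  # hypothetical

daily_energy_kwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000  # Wh -> kWh
daily_energy_mwh = daily_energy_kwh / 1_000                # kWh -> MWh

print(f"Daily energy at that volume: {daily_energy_mwh:,.0f} MWh")
# -> 340 MWh per day at one billion hypothetical requests
```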

OpenAI postpones open-weight AI until summer due to "unexpected and quite amazing" progress

OpenAI is delaying its first open-weight language model since GPT-2 until later this summer, CEO Sam Altman said on X. Originally planned for release before the end of June, the model will include reasoning capabilities. Altman said the research team made unexpected progress that now requires more time, calling the result "very worth the wait." The model was first announced in April.

Disney and Universal sue Midjourney over AI copies of trademarked characters

Disney and Universal have filed a joint lawsuit against AI image generator Midjourney for allegedly creating unauthorized images of characters like Darth Vader and the Minions. The complaint, filed in federal district court in California, accuses Midjourney of repeatedly copying copyrighted material despite previous requests from the studios to stop. Both companies are seeking damages, a jury trial, and an order to prevent future use of protected characters. Reports of such copyright issues involving Midjourney date back to 2023. Midjourney has not responded publicly.

"Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism."

From the complaint