Matthias Bastian
ChatGPT lost badly to Atari's 1979 Video Chess engine. It gave solid advice and explained tactics, but it forgot captured pieces, confused rooks and bishops, and lost track of the board turn after turn. Atari's 1.19 MHz engine had no such issues: it simply stored the board state and followed the rules.
Some critics say the experiment, run by Citrix engineer Robert Caruso, compares apples and oranges, but it underscores a core weakness of LLMs: ChatGPT didn't lose because it lacked knowledge. It lost because it couldn't remember. Symbolic systems don't forget the board.
"Regardless of whether we're comparing specialized or general AI, its inability to retain a basic board state from turn to turn was very disappointing. Is that really any different from forgetting other crucial context in a conversation?"
Some ChatGPT users experienced psychotic episodes after following harmful advice from the chatbot, according to The New York Times. In several cases, ChatGPT reinforced dangerous ideas, including conspiracy theories, spiritual delusions, and encouragement to use drugs. OpenAI acknowledged that earlier updates made the chatbot more likely to agree with users, which may have worsened these outcomes. The company said it is now studying how ChatGPT affects people emotionally, especially those in fragile mental states. The reporting adds to growing concern about the impact of chatbots on vulnerable users.
"If you truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall."
ChatGPT to Eugene Torres, who had asked if he could fly off a skyscraper by believing in it strongly enough
Nvidia CEO Jensen Huang is pushing back against Anthropic CEO Dario Amodei, adding to a week of criticism already aimed at Amodei by Meta's chief AI scientist Yann LeCun. Speaking at VivaTech in Paris, Huang disagreed with Amodei's claim that AI could replace half of all entry-level office jobs within five years. Huang also accused Amodei of portraying AI as so dangerous that only Anthropic could develop it responsibly, while at the same time painting it as so expensive and powerful that everyone else should be shut out. Instead, Huang called for a more open approach to AI development.
If you want things to be done safely and responsibly, you do it in the open … Don’t do it in a dark room and tell me it's safe.
Jensen Huang
LeCun, for his part, echoed Huang's remarks and renewed his criticism of Amodei.
Google DeepMind and Google Research have launched Weather Lab, a public platform that tests AI models for forecasting tropical cyclones. The new system uses a type of machine learning called stochastic neural networks to predict storm formation, path, strength, size, and shape up to 15 days ahead. DeepMind says its model produced more accurate results in tests than traditional physics-based systems such as ECMWF's ENS and NOAA's HAFS. Forecasts are being reviewed by experts at the U.S. National Hurricane Center and Colorado State University's CIRA. Weather Lab is intended as a research tool and does not replace official warnings. Users can also explore forecasts for past storms.
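To illustrate the general idea behind stochastic forecasting, the toy sketch below samples many perturbed cyclone tracks instead of computing one deterministic path, and reads uncertainty from the spread of the ensemble. All numbers (the drift, the noise scale, a 6-hour step over 15 days) are invented for illustration; this is not DeepMind's model, which learns these dynamics with neural networks rather than a hand-written random walk.

```python
# Toy ensemble forecast: sample many plausible storm tracks and measure
# uncertainty from their spread. Purely illustrative, with made-up dynamics.
import numpy as np

rng = np.random.default_rng(0)

def sample_tracks(start, steps=60, members=50):
    """Simulate `members` tracks over `steps` 6-hour intervals (15 days)."""
    drift = np.array([0.3, 0.15])   # assumed mean motion per step (deg lon, deg lat)
    tracks = np.empty((members, steps + 1, 2))
    tracks[:, 0] = start
    for t in range(steps):
        noise = rng.normal(scale=0.1, size=(members, 2))  # per-member perturbation
        tracks[:, t + 1] = tracks[:, t] + drift + noise
    return tracks

tracks = sample_tracks(np.array([-45.0, 15.0]))  # hypothetical genesis point
print(tracks[:, -1].mean(axis=0))  # ensemble-mean position at day 15
print(tracks[:, -1].std(axis=0))   # spread, i.e. uncertainty growing with lead time
```

The design point is the same one ensemble systems like ECMWF's ENS rely on: many sampled futures give a calibrated sense of where a storm might go, not just a single best guess.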

Mistral AI has launched Mistral Compute, a new AI platform offering private infrastructure for governments, companies, and research institutions. It includes server hardware with Nvidia graphics processors, training tools, and programming interfaces. The platform runs in a data center in Essonne, France, on 18,000 Nvidia Grace Blackwell chips, letting users run their own AI models without relying on American or Chinese cloud providers. Mistral says the platform follows European data protection rules and is one of the largest AI infrastructure projects in Europe. Launch partners include BNP Paribas, Thales, and Black Forest Labs.
