The Trump administration is moving to overturn the Biden administration's AI chip export restrictions, according to a spokesperson for the US Department of Commerce. Biden's rules, which were set to take effect on May 15, would have split the world into three different zones for AI chip exports, capping shipments to most countries. Trump officials say the current system is "overly complex, overly bureaucratic," and want to replace it with "a much simpler rule that unleashes American innovation and ensures American AI dominance." One idea on the table is a global licensing system based on intergovernmental agreements, but there is no set timeline for new regulations yet. Since taking office, the Trump administration has already rolled back several other Biden-era AI policies.
Matthias Bastian
Microsoft's Phi 4 model generates 56 sentences of reasoning before responding to a simple "Hi", developer Simon Willison found. This behavior, known as "overthinking", was confirmed by Microsoft's Dimitris Papailiopoulos, who says it's problematic for simple tasks but intentional for complex ones, and that he plans to address the issue. Microsoft released the open Phi 4 reasoning models in early May.

OpenAI's recent move to tweak its planned for-profit restructuring hasn't changed Elon Musk's stance. According to Musk's attorney Marc Toberoff, the adjustment is just a "transparent dodge" and does nothing to address concerns that OpenAI's nonprofit assets are still being used to benefit private interests—including Sam Altman, investors, and Microsoft. Musk previously tried to block the restructuring in court and made a $97.4 billion bid to take over OpenAI, but those efforts failed. Parts of the lawsuit are still ongoing.
The founding mission remains betrayed.
Marc Toberoff, Musk’s lead counsel
Anthropic has introduced a new "AI for Science" initiative that offers up to $20,000 per month in API usage credits to selected researchers. According to the company, applicants are evaluated using objective criteria and must pass a biosecurity review. The program is open to individuals over 18 and research teams from academic institutions, except applicants from countries including China, Russia, and Iran. Anthropic says it may also reject applications that violate its rules. Full details and eligibility requirements are available on the Anthropic website.
New York City's Metropolitan Transportation Authority (MTA) plans to use AI to automatically detect suspicious activity on subway platforms and alert police. According to MTA head of security Michael Kemper, the goal is "predictive prevention." The software analyzes live feeds from surveillance cameras, but will not use facial recognition, MTA spokesperson Aaron Donovan said. The move comes after a series of attacks on the subway. Civil liberties groups, including the NYCLU, have criticized the plan as excessive. NYCLU policy counsel Justin Harrison warned that AI systems are prone to mistakes and could worsen existing inequalities. The MTA has now installed surveillance cameras on every subway platform and inside every train car, with about 40 percent of them monitored in real time.