AI wargame simulations show language models struggle to understand or model de-escalation

Large language models have a tendency to escalate military scenarios - sometimes all the way to nuclear war.

"The AI is always playing Curtis LeMay," Jacquelyn Schneider of Stanford University told Politico, describing her team's experiments using language models in military wargames. LeMay, a US general during the Cold War, was famous for his aggressive stance on nuclear weapons. "It’s almost like the AI understands escalation, but not de-escalation. We don’t really know why that is."

In these simulations, the models consistently pushed situations toward escalation - often ending in nuclear strikes. Schneider and her team think the root of the problem is the training data: large language models learn from existing literature, which usually highlights conflicts and escalation. Peaceful resolutions, like the Cuban Missile Crisis, are rarely covered in detail. With so few examples of "non-events" in the data, de-escalation is hard for the AI to model. These tests used older language models, including GPT-4, Claude 2, and Llama-2.

First Chief AI Officers: US universities prepare for the age of AI

Universities are recognizing that AI literacy is now critical for every graduate, not just those in computer science.

This new priority is leading top US schools, including UCLA and the University of Maryland, to appoint their first Chief AI Officers. The aim is to equip all students with foundational AI skills, reflecting the sense that understanding AI is becoming as essential as traditional core subjects.

Higher education leaders see a particular responsibility in this area, given their historic role in launching companies like Facebook, Google, and Dell. As AI expertise quickly becomes a basic requirement in the tough job market for new graduates, universities are under growing pressure to ensure students are prepared.

Ironically, generative AI gave Google an advantage in the US antitrust proceedings

Generative AI has already changed the rules of the game in the search market, according to Judge Amit Mehta's ruling in the Google antitrust case. Mehta pointed out that "companies already are in a better position, both financially and technologically, to compete with Google than any traditional search company has been in decades (except perhaps Microsoft)." That shift helped Google avoid the toughest penalties in court, such as being forced to sell Chrome or facing a blanket ban on paid default deals with Apple and Mozilla - both key remedies the US Department of Justice had pushed for. Ironically, the same technology that was once seen as a threat to Google's search monopoly ended up working in Google's favor. Now, Google is moving forward with its core product, rolling out AI-powered agentic search worldwide.

G42 looks beyond Nvidia as it explores AMD, Cerebras and Qualcomm for AI campus hardware

G42 is moving to lessen its dependence on Nvidia and is in talks with AMD, Cerebras Systems, and Qualcomm, according to a source familiar with the matter who spoke to Semafor. G42 also holds a stake in Cerebras Systems.

Of the five gigawatts planned for the UAE-US AI Campus, one gigawatt is already set aside for an Nvidia-powered Stargate data center. But these negotiations make it clear that G42 doesn't want to rely solely on Nvidia. The company is aiming for a more diversified hardware base, partly in response to geopolitical tensions and concerns over supply chain dependencies.

WeChat rolls out AI labeling rules in China

WeChat is introducing new rules that require users to label any AI-generated content they share, including videos and public posts. The platform may also add its own visible or invisible labels to content to increase transparency.

When posting on a public WeChat account, users must indicate whether any content - video, image, or text - was generated by AI and choose the appropriate category, such as official/media, news, entertainment, or personal opinion/reference only. | Image: Screenshot via WeChat

These changes follow China's government regulation on mandatory labeling of AI-generated content, which takes effect on September 1, 2025. Users who ignore the rules, such as by removing required labels or sharing misleading content, will face penalties, according to WeChat.