Ad
Short

Qualcomm is making its debut in the data center hardware market with two new AI accelerator chips, the AI200, set for release in 2026, and the AI250, expected in 2027. Designed for liquid-cooled server racks, the chips focus on AI inference—running pre-trained models—rather than training them. Until now, Qualcomm has been best known for its mobile processors.

The move puts Qualcomm in direct competition with Nvidia and AMD. According to the company, the new chips are designed to offer advantages in power efficiency, cost, and memory capacity, supporting up to 768 GB per card. Early large-scale customers are already on board, including a Saudi operator planning deployments with energy demands of up to 200 megawatts. Following the announcement, Qualcomm’s stock price rose by 15 percent.

Ad
Ad
Short

China's military uses domestic AI models like Deepseek and Alibaba's Qwen for autonomous weapons, report says.

A Reuters analysis shows that China's People's Liberation Army is systematically integrating artificial intelligence from domestic companies such as Deepseek and Alibaba into military systems. Hundreds of research papers, patents, and procurement documents point to widespread use of AI for battlefield automation. The projects include robotic dogs, drone swarms with autonomous target recognition, and real-time combat analysis.

According to Reuters, Chinese military institutions also continue to use Nvidia hardware, including A100 chips that fall under US export restrictions. Thirty-five patent filings reference these components.

Several of the army's procurement documents specifically mention Deepseek, while only one cites Alibaba's Qwen model. Researchers at Xi'an Technological University reported that their Deepseek-based system can analyze 10,000 combat scenarios in 48 seconds—a task that would take traditional planning teams 48 hours. The US State Department recently warned that Deepseek plays a role in supporting China's military and intelligence operations.

Short

Reddit set a trap for the AI search company Perplexity to prove that it was scraping content from Google Search.

As part of a lawsuit against several data-scraping firms, Reddit accuses Perplexity of illegally using its content. To back up the claim, Reddit ran a targeted test. According to the lawsuit, Reddit created a "test post" that could "only be crawled by Google’s search engine and was not otherwise accessible anywhere on the internet." Within a few hours, the content from this post appeared in Perplexity's search results, which Reddit says demonstrates that Perplexity was scraping Google's search results.

This incident is just one part of a broader fight, as platforms like Reddit try to block the unauthorized use of their data to train AI models. "A.I. companies are locked in an arms race for quality human content — and that pressure has fueled an industrial-scale ‘data laundering’ economy," said Ben Lee, Reddit's Chief Legal Officer.

Ad
Ad
Short

OpenAI's new browser, ChatGPT Atlas, could pose security risks, according to the company’s head of security, Dane Stuckey.

One of the biggest issues involves so-called prompt injections. In these attacks, malicious instructions are hidden on websites or in emails to manipulate the AI agent. The effects can range from influencing purchasing decisions to stealing private data such as email contents or login credentials.

Stuckey said OpenAI has run extensive tests, introduced new training methods, and built in protective mechanisms. Still, prompt injection remains an unresolved security challenge. To reduce risks, Atlas includes a "logged out mode" that prevents access to user data and a "watch mode" for sensitive websites, which requires active user supervision. Stuckey added that OpenAI is developing additional security features and faster response systems to handle potential attacks.

Ad
Ad
Short

Adobe is launching AI Foundry, a service for companies that want to build their own generative AI models. The platform uses Adobe's Firefly models, which are trained entirely on licensed data. With AI Foundry, organizations can develop custom models for text, images, video, and 3D content based on their own brand assets and intellectual property.

According to Hannah Elsakr at Adobe, more businesses are looking for tailored solutions like this. AI Foundry is positioned as a legally secure alternative to other providers and aims to reduce legal risks for companies using AI. Pricing depends on usage, and one of the first customers is expected to be Walt Disney Imagineering.

Google News