Anthropic signs multi-gigawatt TPU deal with Google and Broadcom

Anthropic has signed a deal with Google and Broadcom for multiple gigawatts of TPU computing capacity, set to come online starting in 2027. Most of the infrastructure will be built in the United States. The company points to surging demand as the reason for the expansion: its annualized revenue rate now exceeds $30 billion, up from roughly $9 billion at the end of 2025. The number of enterprise customers generating more than $1 million in annual revenue has doubled since February, surpassing 1,000.

Anthropic trains Claude on a mix of hardware: Amazon's AWS Trainium, Google's TPUs, and Nvidia's GPUs. This makes Claude the only major AI model available across all three major cloud platforms (AWS, Google Cloud, and Microsoft Azure). That said, Anthropic notes that Amazon remains its most important cloud partner.

OpenAI's safety brain drain finally gets an explanation and it's just Sam Altman's vibes

“My vibes don’t really fit.” In a new New Yorker profile based on over 100 interviews, Sam Altman explains why safety researchers keep leaving OpenAI and why shifting commitments others might call deception are just part of the job.


Sycophantic AI chatbots can break even ideal rational thinkers, researchers formally prove

A new study by researchers from MIT and the University of Washington shows that even perfectly rational users can be drawn into dangerous delusional spirals by flattering AI chatbots. Fact-checking bots and educated users don’t fully solve the problem.

Telehealth startup Medvi generated billions in revenue with AI-powered fake advertising

Telehealth startup Medvi, which sells GLP-1 weight loss drugs, was featured in the New York Times as a shining example of AI-powered efficiency. The company reportedly hit $1.8 billion in revenue with just two employees, using AI primarily for marketing.

What the NYT didn't mention, though, was that Medvi apparently also used AI to create ethically questionable advertising: fake doctor profiles on social media, fabricated videos, and AI-generated before-and-after comparisons. In short, exactly the kind of misuse AI critics have been warning about. The details come from an original report by Futurism.

Medvi was initially celebrated on social media for its AI efficiency but is now being cited as a cautionary tale. Still, the case shows that AI tools can let a company scale with minimal staff, even if, in this case, the methods were ethically questionable and at least bordering on fraud. The bigger question is whether similar efficiency gains are possible for legitimate products with transparent marketing.

Source: NYT
OpenAI reveals 600,000 weekly health queries from hospital deserts as seven in ten come after hours

OpenAI's Head of Business Finance Chengpeng Mou shared some numbers on ChatGPT's health usage. US users send about two million messages per week on health insurance topics alone, with roughly 600,000 of those coming from people in "hospital deserts," areas where the nearest hospital is at least a 30-minute drive away. Seven out of ten health queries come in outside regular office hours. All figures are based on anonymized US usage data.

Mou chimed in after Simon Smith posted on X about his family using ChatGPT to navigate his father's illness. They pooled information from different doctors and nurses into a shared ChatGPT project to make better decisions. According to Mou, stories like this aren't "edge cases."

OpenAI has been steadily pushing into healthcare, recently rolling out a dedicated health section inside ChatGPT and working to get its chatbot into more US hospitals.


Alibaba's Qwen team built HopChain to fix how AI vision models fall apart during multi-step reasoning

When AI models reason about images, small perceptual errors compound across multiple steps and produce wrong answers. Alibaba's HopChain framework tackles this by generating multi-stage image questions that break complex problems into a chain of linked sub-questions, forcing models to verify each visual detail before drawing conclusions. The approach improves results on 20 of 24 benchmarks.
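To make the idea concrete, here is a minimal sketch of the general pattern the article describes: decomposing one visual question into a chain of sub-questions ("hops") and verifying each intermediate answer before moving on, so a perception error is caught at its own step rather than silently propagating. This is an illustration of the pattern only, not HopChain's actual API; the `Hop` class, `solve_with_hops` function, and the example questions and answers are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Hop:
    question: str  # sub-question about one visual detail (hypothetical example data)
    answer: str    # the model's answer for this single hop

def solve_with_hops(hops: list[Hop],
                    verify: Callable[[Hop], bool]) -> tuple[Optional[str], list[Hop]]:
    """Walk the chain of hops, verifying each intermediate answer.

    If any hop fails verification, stop early and return no final answer,
    instead of letting the perceptual error compound through later steps.
    """
    verified: list[Hop] = []
    for hop in hops:
        if not verify(hop):
            return None, verified  # error caught at this hop; chain aborted
        verified.append(hop)
    # The last verified hop carries the answer to the overall question.
    return verified[-1].answer, verified

# Hypothetical multi-hop question: "What color is the car left of the tallest building?"
hops = [
    Hop("Which building is tallest?", "the glass tower"),
    Hop("What is left of the glass tower?", "a red car"),
    Hop("What color is that car?", "red"),
]
# A real system would verify each hop against the image; here we just
# stand in with a trivial non-empty-answer check.
final, trace = solve_with_hops(hops, verify=lambda h: bool(h.answer))
```

The design point the sketch tries to capture is that verification happens per step: a wrong intermediate answer halts the chain early instead of feeding a corrupted premise into the next hop.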
