OpenAI loses top AI researcher Jerry Tworek after seven years

OpenAI is losing yet another senior researcher: Jerry Tworek is out after nearly seven years at the company. Tworek shared the news in a message to his team. He was a key player in building GPT-4, ChatGPT, and OpenAI's first AI coding models, while also helping push new scaling boundaries. Most recently, he ran the "Reasoning Models" team, working on AI systems that can handle complex logical reasoning. He was part of the core group behind the o1 and o3 models, the foundation for much of OpenAI's recent AI progress.

Tworek says he wants "to try and explore types of research that are hard to do at OpenAI." That sounds like a not-so-subtle dig at CEO Sam Altman's relentless focus on products and revenue, which has reportedly been causing tension among researchers. No word yet on where Tworek is headed next.

Abu Dhabi's TII claims its Falcon H1R 7B reasoning model matches rivals seven times its size

The Technology Innovation Institute (TII) from Abu Dhabi has released Falcon H1R 7B, a compact reasoning language model with 7 billion parameters. TII says the model matches the performance of competitors two to seven times larger across various benchmarks, though as always, benchmark scores only loosely correlate with real-world performance, especially for smaller models. Falcon H1R 7B uses a hybrid Transformer-Mamba architecture, which lets it run inference faster than comparable pure-Transformer models.

Falcon H1R 7B scores 49.5 percent across four benchmarks, outperforming larger models like Qwen3 32B (46.2 percent) and Nemotron H 47B Reasoning (43.5 percent). | Image: Technology Innovation Institute (TII)

The model is available as a complete checkpoint and quantized version on Hugging Face, along with a demo. TII released it under the Falcon LLM license, which allows free use, reproduction, modification, distribution, and commercial use. Users must follow the Acceptable Use Policy, which TII can update at any time.
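For readers who want to try the checkpoint themselves, here is a minimal sketch of loading it with the Hugging Face transformers library. The repo ID "tiiuae/Falcon-H1R-7B" is an assumption for illustration (TII publishes under the tiiuae organization, but the exact model name may differ), and the hybrid Transformer-Mamba architecture will likely require a recent transformers release:

```python
# Minimal sketch: loading the checkpoint from Hugging Face with transformers.
# The repo ID below is a hypothetical placeholder; check TII's organization page
# on Hugging Face for the published model name and usage notes.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1R-7B"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick a suitable precision automatically
    device_map="auto",    # requires the accelerate package
)

# Reasoning models are typically prompted with a step-by-step task.
prompt = "A train travels 120 km in 80 minutes. What is its average speed in km/h?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```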

More than five percent of ChatGPT messages worldwide are about health

More than five percent of all messages sent through ChatGPT worldwide deal with health topics. According to a report OpenAI shared exclusively with Axios, 40 million Americans use the chatbot daily for medical questions. Users ask it to explain medical bills, compare insurance plans, or check symptoms, often because they can't get in to see a doctor right away. OpenAI spotted this trend early and marketed GPT-5 as particularly capable for these kinds of use cases.

The report shows OpenAI now handles nearly two million insurance-related questions per week. The surge came after the Trump administration let long-standing health insurance subsidies expire at the start of the new year.

Using ChatGPT for medical advice comes with serious risks. The models still hallucinate, and many users likely rely on weaker model versions without reasoning capabilities, especially when chatting directly with the AI in voice mode, which uses a lighter model for faster responses. OpenAI's newly released promotional video doesn't mention any of these concerns.

Only 5 percent of ChatGPT's 900 million weekly users pay, and reportedly most aren't worth much to advertisers

Almost 90 percent of ChatGPT's roughly 900 million weekly users live outside the USA and Canada, according to The Information, citing data from tracking platform Sensor Tower. This creates a challenge for OpenAI's planned advertising business, since international users generate far less revenue. At Pinterest, for example, the average revenue per user in the USA is $7.64, compared to just 21 cents elsewhere.

India and Brazil rank among the largest ChatGPT markets alongside the USA, Japan, and France. Only about five percent of users pay for subscriptions. For emerging markets like India, OpenAI offers the cheaper "ChatGPT Go" plan at around $5 per month.

OpenAI plans to generate roughly $110 billion from free users by 2030, with advertising likely playing a major role. The company needs this aggressive revenue growth to meet its data center commitments.

Amazon opens Alexa Plus web version for certain users in Early Access

Amazon has released the web version of its AI assistant Alexa Plus in early access for users in the US and Canada. Users can sign up at Alexa.com and use the new chatbot directly in their browser. Alexa Plus was already available on new Echo devices and recently rolled out to older Echos as well. A beta test is currently running in Germany.

The web interface lets users upload documents, emails, and images. Alexa Plus can extract information from these files, turning recipes into shopping lists or automatically adding appointments to your calendar. Amazon is also promoting features like automatic meal planning and filling Amazon Fresh carts based on dietary restrictions. Smart home devices can be controlled through the website as well. In addition, Amazon is launching a new sidebar for quick access and a redesigned mobile Alexa app.

AI tool catches pancreatic cancer in routine scans before symptoms appear

According to physician Zhu Kelei, PANDA, an AI tool developed by Alibaba researchers, has definitively saved lives: in some patients, the cancer was flagged only by the AI. The system analyzes non-contrast CT images, scans on which even experienced radiologists can easily miss tumors.

Anthropic President Daniela Amodei says "the exponential continues until it doesn't"

"The exponential continues until it doesn't," says Anthropic President Daniela Amodei, quoting her colleagues. At Anthropic, the team believed every year that this pace couldn't possibly keep up, and yet it did, Amodei says in an interview with CNBC TV. But that's not guaranteed, she adds. Anthropic doesn't know the future either and could be wrong about this assumption.

Economically, things get more complicated, Amodei says (from 15:56). Even if the models keep improving, rolling them out in companies can stall for "human reasons": change management takes time, procurement processes move slowly, and specific use cases often remain unclear. The key question for whether AI is in a bubble comes down to whether the economy can absorb the technology as fast as it's advancing, she suggests.