Snap's SnapGen++ generates high-resolution AI images on iPhone in under two seconds
Snap’s SnapGen++ runs server-quality image generation directly on phones. Despite having just 0.4 billion parameters, it beats models 30 times larger.
Google is rolling out Gemini 3 Pro to power AI Overviews in search. The system now automatically routes complex queries to Google's most powerful language model, while faster models still handle simpler questions, according to Robby Stein, product manager for Google Search.

This intelligent routing already works in AI Mode, Google's AI-powered search chat, and is now expanding to AI Overviews, the quick answers that appear directly below search queries. The feature is available worldwide in English, but only for paying Google AI Pro and Ultra subscribers.
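The routing idea described above can be illustrated with a toy sketch. This is not Google's actual system: the heuristic, thresholds, and model-tier names are placeholder assumptions chosen only to show the pattern of sending harder queries to a stronger, slower model and everything else to a cheaper default.

```python
# Toy sketch of complexity-based model routing. The heuristic, thresholds,
# and tier names below are illustrative assumptions, not Google's implementation.

def route_query(query: str) -> str:
    """Pick a model tier based on a crude complexity heuristic."""
    words = query.split()
    # Multi-step phrasing or long queries go to the stronger (slower) model.
    complex_markers = {"compare", "explain", "why", "plan", "analyze"}
    normalized = {w.lower().strip("?,.") for w in words}
    if len(words) > 12 or complex_markers & normalized:
        return "powerful-model"  # hypothetical frontier-model tier
    return "fast-model"          # hypothetical cheap default for simple lookups

if __name__ == "__main__":
    print(route_query("weather today"))                        # fast-model
    print(route_query("explain why transformers scale well"))  # powerful-model
```

A production router would more likely use a small classifier model rather than keyword matching, but the control flow is the same: classify first, then dispatch.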
AI Overviews and similar services from other companies have faced criticism for confidently delivering incorrect answers. While source citations create an appearance of trustworthiness, users rarely verify them. More capable models can reduce errors but won't eliminate them.
OpenAI's GPT-5.2 Pro has helped solve another Erdős problem. Neel Somani used the AI model to crack Erdős problem #281 from number theory. Mathematician Terence Tao calls this "perhaps the most unambiguous instance" of an AI solving an open mathematical problem. While earlier proofs may have influenced the model's answer, Tao confirms GPT-5.2 Pro's proof is "rather different".

But Tao warns against a skewed perception of AI capabilities. Negative results rarely get published, while positive results go viral. A new database by Paata Ivanisvili and Mehmet Mars Seven tracks AI attempts at Erdős problems, showing actual success rates of just one to two percent, clustered around easier problems.
Still, Tao sees AI as a useful tool here, even if moderately difficult Erdős problems may remain out of reach. The first autonomous AI solution to an Erdős problem that Tao confirmed dates back to January 4, 2026.
Elon Musk's power fantasies were already extreme a decade ago. According to OpenAI, Musk wanted to amass $80 billion during the company's founding phase to build a self-sufficient city on Mars. He used this goal to justify why he needed a majority stake in OpenAI.
During discussions about potential succession, Musk also caught other participants off guard by suggesting his children should take control of AGI: AI systems capable of matching or surpassing human intelligence across all domains.
Musk has at least 14 children as of January 2026 and has publicly stated that declining birth rates threaten civilization. He believes that educated or "smart" people should have more children, a view that can be categorized as eugenic and aligned with scientific racism. His desire to pass control of human-like AI to his children fits squarely within this worldview.
Chinese AI startup DeepSeek ran into trouble developing its new flagship model and had to switch to Nvidia chips. According to insiders cited by the Wall Street Journal, DeepSeek initially tried using chips from Huawei and other Chinese manufacturers last year, but the results weren't good enough. The company ended up switching to allegedly smuggled Nvidia chips for some training tasks, which finally got things moving. The new model is expected to ship in the coming weeks.
At a recent conference in Beijing, leading Chinese AI researchers admitted that Chinese AI models won't be able to keep pace with US companies without access to better hardware. Justin Lin from Alibaba's Qwen team put the odds of overtaking OpenAI or Anthropic within three to five years at 20 percent at best. Meanwhile, the Chinese government is pushing to cut US chip imports to boost domestic production.
Thousands of pages of evidence in the Musk vs. OpenAI case are now public, and both sides have some explaining to do. One question that stood out to me: can becoming a billionaire ever be a “secondary consideration”?
OpenAI is pushing "Open Responses," an open interface that works with language models from different providers. The project builds on OpenAI's Responses API and lets developers write code once and run it with any AI model.
Currently, Google, Anthropic, and Meta all handle their APIs differently, which means developers have to rewrite code when switching between models. Open Responses tries to fix that with a shared format for requests, responses, streaming, and tool calls. Vercel, Hugging Face, LM Studio, Ollama, and vLLM have already signed on.
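The "write once, run anywhere" idea can be sketched in a few lines. The payload shape below mirrors the general structure of OpenAI's Responses API (a model name plus a list of role/content input items); the provider URLs and model names are placeholder assumptions for illustration, not real endpoints, and the Open Responses specification may differ in detail.

```python
# Sketch of a shared request format reused across providers. The payload
# shape loosely follows OpenAI's Responses API; the URLs and model names
# are hypothetical placeholders, not real endpoints.

def build_request(model: str, prompt: str) -> dict:
    """Build one request payload that any compatible provider could accept."""
    return {
        "model": model,
        "input": [{"role": "user", "content": prompt}],
    }

PROVIDERS = {
    # Hypothetical base URLs for illustration only.
    "hosted": "https://api.example-provider.com/v1/responses",
    "local":  "http://localhost:8000/v1/responses",
}

req = build_request("any-model", "Summarize this article.")
# The same `req` dict could be POSTed to either endpoint unchanged;
# only the base URL and model name differ between providers.
```

The point of a shared format is exactly this: the request-building code stops depending on which vendor is on the other end, so switching providers becomes a configuration change rather than a rewrite.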
Of course, if successful, this move works in OpenAI's favor. If its API becomes the default, competitors would need to adapt to OpenAI's approach, while existing OpenAI customers wouldn't have to change a thing. The "open" label also lets the company signal a spirit of collaboration, even though it's not sharing any technology beyond what's already public.