Short

Wan2.2 A14B now tops the rankings for open-source video models, according to Artificial Analysis. It ranks seventh for text-to-video and fourteenth for image-to-video, with the lower image-to-video placement likely due to its 16 frames per second output compared to the 24 fps of some competitors. Among open models, Wan2.2 A14B leads the field, but it still trails closed models like Veo 3 and Seedance 1.0 in overall performance. Pricing, however, is often much lower, depending on the provider.

Image: Artificial Analysis
Short

Eighty-one-year-old psychologist Harvey Lieberman describes ChatGPT as "not a crutch, but a cognitive prosthesis — an active extension of my thinking process."

In a recent New York Times essay, Lieberman explains how an experiment with ChatGPT turned into a daily routine. He treats the AI as a reliable thinking partner, using it to sharpen his language, deepen self-reflection, and even spark emotional resonance. At a stage in life when thoughts can slow down, Lieberman says ChatGPT has helped him "re-encounter my own voice."

"ChatGPT may not understand, but it made understanding possible."

Dr. Harvey Lieberman

ChatGPT has also drawn criticism for reinforcing users' beliefs and, in some cases, steering vulnerable or mentally ill people into negative thought patterns. OpenAI has acknowledged these risks.

Short

Cohere's new Command A Vision model is designed to handle images, diagrams, PDFs, and other types of visual data. Cohere says the model outperforms GPT-4.1, Llama 4 Maverick, Pixtral Large, and Mistral Medium 3 on standard vision benchmarks.

The model's OCR can recognize both the text and the structure of documents such as invoices and forms, outputting the extracted data as structured JSON. Command A Vision can also analyze real-world images, for example to identify potential hazards in industrial environments, the company says.

Image: Cohere

Command A Vision is available through the Cohere platform and for research on Hugging Face. The model can run locally with either two A100 GPUs or a single H100 using 4-bit quantization.
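A back-of-envelope calculation shows why 4-bit quantization brings the model within reach of a single H100. This sketch assumes a parameter count of roughly 112 billion (the text-only Command A is about 111B; the exact figure for the vision variant is not stated here):

```python
# Rough memory footprint of the model weights under different precisions.
# Assumption: ~112B parameters (hypothetical round figure for illustration).
params = 112e9

fp16_gb = params * 2 / 1e9    # 16-bit weights: 2 bytes per parameter
q4_gb = params * 0.5 / 1e9    # 4-bit weights: 0.5 bytes per parameter

print(f"FP16: ~{fp16_gb:.0f} GB")   # far more than one GPU's memory
print(f"4-bit: ~{q4_gb:.0f} GB")    # fits within an 80 GB H100
```

Weights are only part of the story in practice, since activations and the KV cache need additional memory, but the 4x reduction from 16-bit to 4-bit is what makes single-GPU inference plausible.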

Short

Google is expanding its "AI Mode" to the UK, following launches in the US and India. The feature appears as an extra tab in Google Search and in the Google app for Android and iOS, letting users ask complex questions via text, voice, or image and get AI-generated answers with additional links. AI Mode runs on a customized version of Gemini 2.5 and uses a query fan-out technique that breaks a question into smaller subtopics and searches them all in parallel.
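The fan-out idea can be illustrated with a minimal sketch. In AI Mode the decomposition is done by the Gemini model and the searches hit Google's backend; here both are replaced by hypothetical stand-ins for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def split_query(query: str) -> list[str]:
    # Stand-in for the model-driven decomposition step: a fixed mapping
    # of one complex question to several narrower subqueries.
    subtopics = {
        "best laptop for travel": [
            "lightweight laptops",
            "laptop battery life comparison",
            "durable laptops for travel",
        ]
    }
    return subtopics.get(query, [query])

def search(subquery: str) -> str:
    # Stand-in for a real search backend call.
    return f"results for: {subquery}"

def fan_out(query: str) -> list[str]:
    # Issue all sub-searches concurrently, then collect results in order.
    subqueries = split_query(query)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(search, subqueries))

results = fan_out("best laptop for travel")
```

The collected results would then be synthesized by the model into a single answer with links, which is the step this sketch omits.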

Google says AI Mode should result in a "greater diversity" of visited websites—a phrase that sidesteps the more important effect: less traffic flowing to the open web. Even the "light" version, called AI Overviews, has already led to a sharp drop in web clicks.
