How useful are million-token context windows, really? In a recent interview, Nikolay Savinov from Google DeepMind explained that when a model is fed many tokens, it has to distribute its attention across all of them: focusing more on one part of the context automatically means less attention for the rest. To get the best results, Savinov recommends including only the content that is truly relevant to the task.

I'm just talking about-- the current reality is like, if you want to make good use of it right now, then, well, let's be realistic.

Nikolay Savinov

Recent research supports this approach. In practice, this could mean cutting out unnecessary pages from a PDF before sending it to an AI model, even if the system can technically process the entire document at once.
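As a rough illustration of that idea, here is a minimal sketch that keeps only the pages judged relevant before a document's contents go into a prompt. It assumes the pypdf library; the file names, page numbers, and the downstream model call are illustrative placeholders, not details from the interview.

```python
# A minimal sketch, assuming the pypdf library: keep only the relevant pages
# of a PDF before handing its contents to a model. File names and page
# numbers are placeholders.
from pypdf import PdfReader, PdfWriter

relevant_pages = [0, 3, 4]  # 0-indexed pages judged relevant to the task

reader = PdfReader("full_report.pdf")
writer = PdfWriter()
for index in relevant_pages:
    writer.add_page(reader.pages[index])

# Write the trimmed document, or extract its text directly for the prompt.
with open("trimmed_report.pdf", "wb") as f:
    writer.write(f)

prompt_context = "\n".join(
    reader.pages[index].extract_text() for index in relevant_pages
)
```

The selection step is the point, not the file format: whatever the source, deciding what is relevant before the model sees it keeps attention concentrated on the material that matters.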

Does saying "please" and "thank you" really help when talking to AI? According to Murray Shanahan, a senior researcher at Google DeepMind, being polite with language models can actually lead to better results. Shanahan says that clear, friendly phrasing, including words like "please" and "thank you," can improve the quality of a model's responses, though the effect depends on the specific model and the context.

There's a good scientific reason why that [being polite] might get better performance out of it, though it depends – models are changing all the time. Because if it's role-playing, say, a very smart intern, then it might be a bit more stroppy if not treated politely. It's mimicking what humans would do in that scenario.

Murray Shanahan
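Whether this holds for a particular model is easy to check empirically. The sketch below compares a curt and a polite phrasing of the same request; it assumes the official openai Python client and a placeholder model name, neither of which comes from Shanahan's remarks.

```python
# A minimal sketch for comparing curt vs. polite phrasing of the same request.
# Assumes the official openai Python client (and OPENAI_API_KEY set in the
# environment); the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

prompts = {
    "curt": "Summarize this report in three bullet points.",
    "polite": "Could you please summarize this report in three bullet points? Thank you!",
}

for label, prompt in prompts.items():
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---")
    print(response.choices[0].message.content)
```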

Anthropic employees are about to get rich. The company is giving current and former staffers who have been with the company for at least two years a chance to cash out up to 20 percent of their shares—capped at $2 million per person. Anthropic will buy back the shares at its latest $61.5 billion valuation, matching the price from its March funding round. The buyback, worth several hundred million dollars in total, is expected to wrap up by the end of the month, according to The Information. Anthropic, founded by former OpenAI researchers, now has more than 800 employees.

A federal judge in San Francisco is questioning whether Meta can use copyrighted books to train its AI models without getting permission from authors. The case centers on Meta's Llama model, which was trained on works including those by Sarah Silverman. Meta argues its use of the material falls under "fair use," while the plaintiffs say it amounts to copyright infringement. During a recent hearing, Judge Vince Chhabria acknowledged that using copyrighted data for AI could be considered transformative, but said that doesn't necessarily make it fair. He pointed out that the resulting technology could undermine the entire market for original works.

You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products. You are dramatically changing, you might even say obliterating, the market for that person's work, and you're saying that you don't even have to pay a license to that person.

U.S. District Judge Vince Chhabria

Google is rolling out AdSense ads directly inside AI chatbot conversations. The company has expanded its AdSense for Search platform to support chatbots from startups like iAsk and Liner, according to Bloomberg. The move comes as Google looks for new ways to maintain its ad revenue amid a potential drop in traditional search queries, reflecting broader changes in how people find information online. AI tools such as OpenAI's ChatGPT and Perplexity, along with Google's own Gemini products, increasingly deliver direct answers to users' questions, sometimes eliminating the need to visit external websites, the click-through traffic that has long underpinned Google's business model. Bloomberg reports that Google began testing this new chatbot ad format in 2024.
