Short

How useful are million-token context windows, really? In a recent interview, Nikolay Savinov from Google DeepMind explained that when a model is fed many tokens, it has to distribute its attention across all of them, so focusing more on one part of the context automatically means less attention for the rest. To get the best results, Savinov recommends including only the content that is truly relevant to the task.

I'm just talking about-- the current reality is like, if you want to make good use of it right now, then, well, let's be realistic.

Nikolay Savinov

Recent research supports this approach. In practice, this could mean cutting out unnecessary pages from a PDF before sending it to an AI model, even if the system can technically process the entire document at once.
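As a concrete illustration of that advice, here is a minimal Python sketch that trims a PDF down to its relevant pages before anything is sent to a model. It assumes the pypdf package; the file names and the list of page numbers are placeholders you would determine yourself, for example from a table of contents or a keyword search.

```python
# Minimal sketch: keep only the relevant pages of a PDF before sending it to a model.
# Assumes `pip install pypdf`; file names and page numbers below are placeholders.
from pypdf import PdfReader, PdfWriter

reader = PdfReader("full_report.pdf")      # the complete document
relevant_pages = [0, 4, 5, 6, 42]          # zero-based indices of the pages the task actually needs

writer = PdfWriter()
for index in relevant_pages:
    writer.add_page(reader.pages[index])   # copy only the pages that matter

with open("trimmed_report.pdf", "wb") as out_file:
    writer.write(out_file)                 # upload or embed this smaller file instead
```

The point is not the specific library, but that the pruning happens before the model ever sees the document, so its attention is spent only on content that matters.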

Short

Does saying "please" and "thank you" really help when talking to AI? According to Murray Shanahan, a senior researcher at Google DeepMind, being polite with language models can actually lead to better results. Shanahan says that clear, friendly phrasing, including words like "please" and "thank you", can improve the quality of a model's responses, though the effect depends on the specific model and the context.

There's a good scientific reason why that [being polite] might get better performance out of it, though it depends – models are changing all the time. Because if it's role-playing, say, a very smart intern, then it might be a bit more stroppy if not treated politely. It's mimicking what humans would do in that scenario.

Murray Shanahan
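
Since the effect varies by model, one simple way to check it for yourself is to A/B the same request with and without polite phrasing. The sketch below assumes the openai Python package and an OPENAI_API_KEY in the environment; the model name is a placeholder, and the outcome will differ by model and task.

```python
# Minimal sketch: compare a terse prompt with a polite one on the same task.
# Assumes `pip install openai` and OPENAI_API_KEY is set; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

prompts = [
    "Explain the difference between a list and a tuple in Python in two sentences.",
    "Could you please explain the difference between a list and a tuple in Python "
    "in two sentences? Thank you!",
]

for prompt in prompts:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute whichever model you are testing
        messages=[{"role": "user", "content": prompt}],
    )
    print(prompt)
    print("->", response.choices[0].message.content.strip())
    print()
```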

Short

Demis Hassabis, co-founder of AI lab DeepMind, is reportedly "deeply frustrated" with the merger with Google Brain and with Google's focus on commercializing AI, insiders tell The Information. The merger has not gone smoothly, and tensions remain; Hassabis is said to have considered leaving Google before the two AI units were combined into a single lab, and to have told a colleague that it might be difficult for Google to catch up with OpenAI's video model Sora. He has reportedly made organizational changes to restore the standing of pure AI research at Google, and has complained about the exodus of employees to OpenAI and the media coverage of those departures. According to the report, pressure to show progress may have led to exaggerated claims about the capabilities of DeepMind's systems. Even so, Hassabis remains convinced that artificial general intelligence is within reach.
