Google's PaLM 2 gets better at math when you let the language model take a breather.

In a recent paper, researchers investigated whether large language models such as GPT-4 or PaLM 2 can act as optimizers that automatically find solutions to predefined problems, such as recommending movies or solving grade-school math problems.

The language models searched for the best prompt for each task on their own. Curiously, the prompt that helped PaLM 2-L most on the math problems was: "Take a deep breath and work on this problem step-by-step." Without the deep breath, accuracy dropped by almost ten points.
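The search loop described above can be sketched roughly as follows. This is a minimal toy sketch, not the paper's actual setup: `propose_prompt` and `score` are hypothetical stand-ins for the LLM optimizer and the task-accuracy evaluation, so the example runs without model access.

```python
# Toy sketch of an optimization-by-prompting loop: keep a history of
# (prompt, score) pairs, repeatedly propose a new candidate based on the
# best prompts so far, and evaluate it.

def score(prompt: str, keywords) -> float:
    """Stand-in scorer: fraction of task keywords the prompt contains.
    In the paper this would be accuracy on a held-out problem set."""
    return sum(kw in prompt for kw in keywords) / len(keywords)

def propose_prompt(history) -> str:
    """Stand-in optimizer: the paper feeds the scored history (the
    'meta-prompt') to an LLM, which proposes a new instruction.
    Here we just mutate the best prompt so far."""
    best, _ = max(history, key=lambda pair: pair[1])
    return best + " step-by-step"

def optimize(seed: str, keywords, steps: int = 3):
    history = [(seed, score(seed, keywords))]
    for _ in range(steps):
        candidate = propose_prompt(history)
        history.append((candidate, score(candidate, keywords)))
    return max(history, key=lambda pair: pair[1])

keywords = ["problem", "step-by-step"]
best_prompt, best_score = optimize("Work on this problem.", keywords)
print(best_prompt, best_score)
```

The key design choice, as in the paper, is that the optimizer sees the scored history of past prompts rather than gradients, so any sufficiently capable text generator can play the optimizer role.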

Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.