Google's PaLM 2 gets better at math when you let the language model take a breather.


In a paper, researchers investigated whether language models such as GPT-4 or PaLM 2 can serve as optimizers that automatically find solutions to predefined problems, such as recommending movies or solving grade-school math problems.

The language models searched for the best prompt for each task on their own. The prompt that worked particularly well for PaLM 2-L on math problems was a curious one: "Take a deep breath and work on this problem step-by-step." Without the deep-breath instruction, accuracy dropped by almost ten points.
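The search procedure the paper describes can be sketched roughly as follows: keep a history of instructions with their scores, ask an optimizer model to propose a new instruction given that history, score it on the task, and repeat. This is a minimal, runnable sketch; `propose_instruction` and `score_instruction` are simplified stand-ins of our own (a real system would prompt the optimizer LLM with the scored history and measure accuracy on actual math problems).

```python
import random

def propose_instruction(history, rng):
    # Stand-in for the optimizer LLM: a real system would show the model
    # the scored history and ask for a better instruction. Here we just
    # mutate the best instruction so the loop runs end to end.
    best = max(history, key=lambda pair: pair[1])[0]
    suffixes = ["step-by-step.", "carefully.", "and check your work."]
    return best.split(" and ")[0] + " and solve it " + rng.choice(suffixes)

def score_instruction(instruction):
    # Stand-in for the scorer: a real system would run the instruction on
    # held-out math problems and return accuracy. This toy score only
    # exists so the demo has something to optimize.
    return len(instruction) % 50 / 50

def opro_loop(seed_instruction, steps=10, seed=0):
    rng = random.Random(seed)
    history = [(seed_instruction, score_instruction(seed_instruction))]
    for _ in range(steps):
        candidate = propose_instruction(history, rng)
        history.append((candidate, score_instruction(candidate)))
    # Return the best (instruction, score) pair found so far.
    return max(history, key=lambda pair: pair[1])
```

Because the starting instruction stays in the history, the loop can only match or improve on it, mirroring how the paper keeps the best prompts found so far in the optimizer's context.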

Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.