According to AI researcher François Chollet, LLMs (large language models) such as OpenAI's GPT-4 act as repositories of millions of vector programs mined from human-generated data, learned as a by-product of language compression. Prompt engineering, in this view, is the search for the right "program key" and "program argument(s)" to accomplish a given task more accurately. Chollet expects prompt engineering to remain critical as LLMs evolve, but to be increasingly automated for a seamless user experience. This aligns with recent work from labs such as DeepMind, which is exploring automated prompt engineering.

Researchers at Cornell University have developed a tiny quadruped robot powered by combustion actuators fueled by methane and oxygen. The insect-sized robot, described in a paper published in Science, can jump 59 centimeters straight up and walk while carrying 22 times its own weight. The researchers aim to apply the power of the combustion actuators to large-scale, variable-recruitment musculature for stronger, more agile robots. "Putting thousands of these actuators in bundles over a rigid endoskeleton could allow for dexterous and fast land-based hybrid robots," said one researcher.

Google's PaLM 2 gets better at math when you let the language model take a breather.

In a new paper, researchers investigated whether language models such as GPT-4 or PaLM 2 can serve as optimizers that automatically find solutions to predefined problems, such as recommending movies or solving grade-school math problems.

The language models were tasked with finding the best prompts for each problem on their own. The prompt that worked particularly well for PaLM 2-L on math problems was a curious one: "Take a deep breath and work on this problem step-by-step." Without the deep breath, accuracy dropped by almost ten points.
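The optimization loop described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names (`score_prompt`, `propose_prompts`) and the toy scoring heuristic are hypothetical stand-ins for the real components, which would query a language model to score candidate instructions on held-out math problems and to propose improved ones.

```python
import random

# Candidate instruction strings; in the real setup these would be
# generated by an "optimizer" LLM rather than hard-coded.
CANDIDATES = [
    "Let's solve this.",
    "Let's think step by step.",
    "Take a deep breath and work on this problem step-by-step.",
]

def score_prompt(prompt: str) -> float:
    """Stub: in the real setup this would prepend `prompt` to each
    grade-school math question, query the model, and return accuracy.
    Here a toy heuristic stands in for measured accuracy."""
    return 0.5 + 0.1 * prompt.count("step")

def propose_prompts(history: list[tuple[str, float]]) -> list[str]:
    """Stub: the optimizer LLM would be shown `history` (past prompts
    with their scores) and asked to propose improved instructions."""
    return random.sample(CANDIDATES, k=2)

def optimize(steps: int = 5) -> tuple[str, float]:
    """Iteratively propose prompts, score them, and keep the best."""
    history: list[tuple[str, float]] = []
    for _ in range(steps):
        for prompt in propose_prompts(history):
            history.append((prompt, score_prompt(prompt)))
    return max(history, key=lambda pair: pair[1])

best_prompt, best_score = optimize()
print(best_prompt)
```

The key design point is that both roles (proposing and scoring) are filled by language models in the paper, so the loop needs no gradients, only prompt text and measured task accuracy.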
