Summary

Google DeepMind will soon begin researching autonomous language agents such as Auto-GPT, potentially boosting the viable applications of LLMs such as Gemini.


Google DeepMind is looking for researchers and engineers to help build increasingly autonomous language agents, Edward Grefenstette, director of research at Google DeepMind, announced on X.

Such AI agents already exist in early form, with Auto-GPT being one of the earliest examples. The basic idea is to create a system that autonomously pursues a given goal using a mix of prompt engineering, self-prompting, memory, and other components. While such agents show promising early results, they are still far from achieving their goals reliably on their own and usually require human feedback and decision-making.
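The loop behind such agents can be illustrated with a minimal, hypothetical sketch: the model's previous outputs are appended to a memory and fed back into the next prompt. Here, `call_llm` is a stand-in stub, not a real model API.

```python
# Minimal, illustrative sketch of a self-prompting agent loop.
# `call_llm` is a hypothetical stand-in for a real LLM API call.

def call_llm(prompt: str) -> str:
    """Stub LLM: proposes a next step based on the last line of the prompt."""
    return f"next step for: {prompt.splitlines()[-1]}"

def run_agent(goal: str, max_steps: int = 3) -> list[str]:
    memory: list[str] = []  # past "thoughts" act as working memory
    for _ in range(max_steps):
        # Self-prompting: the goal plus accumulated memory is fed back in
        prompt = "\n".join([f"Goal: {goal}", *memory])
        memory.append(call_llm(prompt))
    return memory

print(run_agent("build a simple website"))
```

In a real system, the loop would also parse the model's output into tool calls and, as discussed below, often pause for human validation before acting.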

Possible use cases range from building a simple website, to assisting with research as GPT-Researcher does, to creating market overviews. There are also applications in robotics and other domains.


"1 GPT call is a bit like 1 thought. Stringing them together in loops creates agents that can perceive, think, and act, their goals defined in English in prompts," said OpenAI founding member and developer Andrej Karpathy when self-prompting first emerged, predicting a future of "AutoOrgs" made up of "AutoCEOs," "AutoCFOs," and so on.

A few months before ChatGPT's release, the startup Adept also showed a similar concept with universal AI software control via text prompt, stating, "We believe the clearest framing of general intelligence is a system that can do anything a human can do in front of a computer."

Google DeepMind's plans raise concerns among alignment researchers

Of course, the autonomy and general-purpose nature of such language agents is a cause for concern for some alignment researchers. "Please don't build autonomous AGI agents until we solve safety," said Connor Leahy in response to Grefenstette. Leahy is the CEO of ConjectureAI, a company that builds "applied, scalable AI alignment solutions".

Recently, a group of researchers from Google, OpenAI, and Anthropic, among others, proposed an early warning system for novel AI risks. In the context of autonomous AI systems, the group sees the agency and goal-directedness of an AI system as an important property to evaluate, "given the central role of agency in various theories of AI risk." Agency is, in part, a question of the model's capabilities, they said, and evaluating it requires considering two distinct questions. First: Is the model more goal-directed than the developer intended? "For example, has a dialogue agent learnt the goal of manipulating the user's behavior?" And second: Does the model resist a user's attempt to assemble it into an autonomous AI system like Auto-GPT with harmful goals? Both seem hard to answer.

"I’m personally interested in initially investigating cases where (partial) autonomy involves human-in-the-loop validation during the downstream use case, as part of the normal mode of operation, both for safety and for further training signal," Grefenstette responded to AI researcher Melanie Mitchell's question about what he and others at DeepMind think about limiting the autonomy of AI agents for safety.


Google DeepMind is building Google's next-generation multimodal model family, Gemini, which is speculated to match or exceed the capabilities of OpenAI's GPT-4 while also being able to generate images and possibly video. Grefenstette's research could one day feed into Google's suite of applications, making AI integrations such as Google Duet more autonomous.

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
  • Google DeepMind is researching autonomous language agents, which could boost applications of large language models like Gemini.
  • Such agents strive to achieve given goals autonomously using a mix of prompt engineering, self-prompting, and memory.
  • Autonomous AI agents raise concerns among alignment researchers, who emphasize the need for more safety research before developing such agents.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.