
When does it make sense to use AI agents instead of just prompts? Not very often, according to a new analysis from Anthropic. The AI company draws a clear line between workflows (where code orchestrates AI models and tools) and agents (which control their own processes independently).
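The distinction can be sketched in a few lines of Python. This is an illustrative toy, not Anthropic's code: `call_model` is a stand-in for a real LLM API call, the workflow hard-codes its step sequence, and the agent lets the model's output drive a bounded control loop.

```python
def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real system would query a model API."""
    if prompt.startswith("summarize"):
        return "summary"
    if prompt.startswith("translate"):
        return "translation"
    if prompt.startswith("next step"):
        # In this toy, the model always decides to stop immediately.
        return "stop"
    return "result"

# Workflow: the code fixes the sequence of steps; the model only fills them in.
def workflow(doc: str) -> str:
    summary = call_model(f"summarize: {doc}")
    return call_model(f"translate: {summary}")

# Agent: the model itself decides what happens next, inside a capped loop
# (the cap is the kind of oversight Anthropic says agents require).
def agent(task: str, max_steps: int = 5) -> int:
    steps = 0
    while steps < max_steps:
        decision = call_model(f"next step for: {task}")
        if decision == "stop":
            break
        steps += 1
    return steps
```

The workflow's cost and behavior are predictable because its structure lives in code; the agent trades that predictability for flexibility, which is why the loop needs a step limit.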

While agents can handle complex, open-ended tasks, they also need more oversight and computing power. Anthropic suggests starting with basic prompts and only adding complexity when absolutely necessary. Though development frameworks can help teams get started with agents, simpler approaches often work better in production environments.

For teams considering AI agents, Anthropic recommends focusing on three key areas: keeping designs simple, making processes transparent, and carefully crafting user interfaces. The company sees the most potential for agents in customer service and software development tasks.


The definition of artificial general intelligence (AGI) might come down to dollars and cents. The Information reports that Microsoft and OpenAI's agreement defines AGI as AI systems that can outperform humans at most commercially valuable work, with one key addition: these systems must also generate the maximum returns owed to early investors, roughly $100 billion in profits.

That profit target could be a long way off: OpenAI is currently operating at a loss and doesn't expect its first annual profit until 2029, so reaching the $100 billion threshold could take many more years.

Beyond the AGI definition itself, a source told The Information that Microsoft will get access to any technology OpenAI develops by 2030, whether it's classified as AGI or not. And with OpenAI's planned shift to become a for-profit company, both companies are reportedly considering removing the AGI clause altogether.
