OpenAI and Anthropic become AI consultants as enterprise customers struggle with agent reliability

Integrating AI agents into enterprise operations takes more than a few ChatGPT accounts. OpenAI is hiring hundreds of engineers for its technical consulting team to customize models with customer data and build AI agents, The Information reports. The company currently has about 60 such engineers plus over 200 in technical support. Anthropic is also working directly with customers.

The problem: AI agents often don't work reliably out of the box. Retailer Fnac tested models from OpenAI and Google for customer support, but the agents kept mixing up serial numbers. The system reportedly only worked after getting help from AI21 Labs.

OpenAI's new agentic enterprise platform "Frontier" shows just how complex AI integration can get: the technology needs to connect to existing enterprise systems ("systems of record"), understand business context, and execute and optimize agents, all before users ever touch an interface. | Image: OpenAI

This need for hands-on customization could slow how quickly AI providers scale their B2B agent business, and it raises questions about how fast tools like Claude Cowork can deliver value in an enterprise context. Model improvements and better reliability on routine tasks could help, but security risks fundamental to LLMs remain.

Nvidia CEO Jensen Huang claims AI no longer hallucinates, apparently hallucinating himself

Nvidia CEO Jensen Huang claims in a CNBC interview that AI no longer hallucinates. At best, that’s a massive oversimplification. At worst, it’s misleading. Either way, nobody pushes back, which says a lot about the current state of the AI debate.


Japan's lower house election becomes a testing ground for generative AI misinformation

AI-generated fake videos are spreading rapidly across Japanese social media during the lower house election campaign. In a survey, more than half of respondents believed fake news to be true. But Japan is far from the only democracy facing this problem.

OpenAI's UAE deal with G42 shows AI models are cultural products as much as technical tools

OpenAI is working with Abu Dhabi-based G42 on a custom ChatGPT for the UAE, Semafor reports. The version will speak the local Arabic dialect and may include content restrictions. One source said the UAE wants the chatbot to project a political line consistent with the monarchy's. Global ChatGPT will stay available but adapted to local laws, notifying users when content violates regulations. OpenAI is fine-tuning rather than retraining to cut costs.

G42 is led by Sheikh Tahnoon bin Zayed Al Nahyan, the UAE President's brother, National Security Advisor, and head of the country's largest sovereign wealth fund. The companies have been partners since October 2023.

These adaptations show AI models are cultural products as much as technical tools. Generated content flows into every corner of society, and even small changes to cultural narratives can have lasting effects. That is why both China and the US are working to control their AI models' output to shape domestic conversations and spread their worldviews abroad.

Google's PaperBanana uses five AI agents to auto-generate scientific diagrams

Researchers at Peking University and Google built a system that turns method descriptions into scientific diagrams automatically. Five specialized AI agents handle everything from finding reference images to quality control, tackling one of the last manual bottlenecks in academic publishing.
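
The summary maps naturally onto a sequential agent chain. The sketch below illustrates that general pattern only; the role names, state fields, and ordering are assumptions for illustration, not PaperBanana's actual design.

```python
# Illustrative five-stage agent pipeline for turning a method description
# into a diagram. Every role name and interface here is an assumption
# based on the article's summary, not the PaperBanana paper itself.
from dataclasses import dataclass, field

@dataclass
class DiagramState:
    method_text: str                        # input: the paper's method section
    references: list[str] = field(default_factory=list)
    layout_plan: str = ""
    figure_svg: str = ""
    verdict: str = ""

def reference_retriever(s: DiagramState) -> DiagramState:
    s.references = ["similar_paper_fig2.png"]    # stub: search for reference images
    return s

def layout_planner(s: DiagramState) -> DiagramState:
    s.layout_plan = "left-to-right pipeline"     # stub: an LLM call in practice
    return s

def renderer(s: DiagramState) -> DiagramState:
    s.figure_svg = "<svg><!-- draft --></svg>"   # stub: produce a first draft
    return s

def refiner(s: DiagramState) -> DiagramState:
    s.figure_svg = s.figure_svg.replace("<!-- draft -->", "<rect/>")
    return s

def quality_checker(s: DiagramState) -> DiagramState:
    s.verdict = "pass" if "<rect/>" in s.figure_svg else "retry"
    return s

def generate_diagram(method_text: str) -> DiagramState:
    state = DiagramState(method_text=method_text)
    for agent in (reference_retriever, layout_planner, renderer,
                  refiner, quality_checker):
        state = agent(state)
    return state
```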


Waymo taps Google DeepMind's Genie 3 to simulate driving scenarios its cars have never seen

By combining Waymo's real-world driving data with DeepMind's Genie 3, Alphabet is showing the kind of AI leverage that few companies can match: using one subsidiary's world model to supercharge another's autonomous driving simulations.

Sam Altman predicts AI agents will integrate any service they want, with or without official APIs

"Every company is an API company now, whether they want to be or not," says OpenAI CEO Sam Altman, repeating a phrase that's stuck with him recently. Altman made the comment while discussing how generative AI could reshape traditional software business models.

AI agents will soon write their own code to access services even without an official API, Altman believes. If that happens, companies won't have a say in joining this new "platform shift." They'll simply be integrated, and the traditional user interface will lose value.
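
To make the mechanism concrete, here is a minimal sketch of what such agent-written, API-less integration code could look like, using browser automation with Playwright. The site, selectors, and function are entirely hypothetical; this is our illustration of the pattern, not anything OpenAI has shown.

```python
# Hypothetical sketch: agent-generated "integration" code that drives a
# service's web UI directly because no official API exists. The URL and
# CSS selectors below are made up for illustration.
from playwright.sync_api import sync_playwright

def fetch_order_status(order_id: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://shop.example.com/orders")  # service without a public API
        page.fill("#order-search", order_id)          # act like a human user
        page.click("button[type=submit]")
        page.wait_for_selector(".order-status")       # wait for the result to render
        status = page.inner_text(".order-status")     # scrape what the UI displays
        browser.close()
        return status
```

The design point: once an agent can generate and run this kind of glue code on demand, a vendor withholding an official API no longer prevents integration, which is exactly the shift Altman describes.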

Some SaaS companies will remain highly valuable by leveraging AI for themselves, according to Altman. Others are just a "thinner layer" and won't survive the shift. Established players with strong core systems who use AI strategically are best positioned, he says.

Recent advances in AI agents and tools like Cowork have already driven down valuations for some software companies. The thinking: AI will handle more tasks directly, making niche solutions unnecessary.

Claude Opus 4.6 wrote mustard gas instructions in an Excel spreadsheet during Anthropic's own safety testing

Anthropic's alignment training breaks down when Claude operates a graphical user interface.

In pilot tests, Anthropic was able to get Claude Opus 4.6 to write detailed instructions for making mustard gas into an Excel spreadsheet and to maintain an accounting spreadsheet for a criminal gang, behaviors that did not occur, or occurred only rarely, in text-only interactions.

"We found some kinds of misuse behavior in these pilot evaluations that were absent or much rarer in text-only interactions," Anthropic writes in the Claude Opus 4.6 system card. "These findings suggest that our standard alignment training measures are likely less effective in GUI settings."

According to Anthropic, tests with the predecessor model Claude Opus 4.5 in the same environment showed "similar results," meaning the problem has persisted across model generations without being noticed. The vulnerability apparently arises because models learn to refuse malicious requests in conversation but do not fully transfer that behavior to agentic tool use.