Millions already use AI chatbots for financial advice, but experts warn of clear limits

Millions of people are already using chatbots like ChatGPT for retirement planning, the Financial Times reports. In a Lloyds Bank survey, more than half of respondents said they had used AI for financial advice. Experts, however, point to clear limits: the UK's Financial Conduct Authority recently cautioned against AI hallucinations.

A test by Which? in November, for example, showed that popular chatbots like ChatGPT, Gemini, Perplexity, and Meta AI achieved overall scores of only 55 to 71 percent. Still, the pressure on the financial industry is significant: pension providers like Scottish Widows are now developing their own AI tools.

"I think that's the danger of AI is that people will assume they know what they don't," warns JPMorgan strategist John Bilton. According to Bilton, if users treat AI as an investment tool rather than a data tool, it risks making "underlying behavioural biases — such as the tendency to hold too much in cash or trade too often — stronger."

A counterexample is a 41-year-old software engineer who had ChatGPT restructure his entire $200,000 portfolio. ChatGPT advised him to diversify his risk exposure: 80 percent into a broad market equity index tracker and the remainder into a bond ETF. He told the Financial Times that speaking with the chatbot helped him to "commit to and actually execute" his plan.

Source: FT

Study warns of "AI Brain Fry" as workers hit cognitive limits overseeing AI agents

A BCG study of nearly 1,500 workers shows that simultaneously overseeing too many AI tools triggers cognitive exhaustion. The consequences are measurable, from higher error rates to increased intent to quit.

U.S. military strikes 3,000 targets in Iran with AI support, but oversight remains "underinvested"

The Wall Street Journal confirms and expands on previous reports about the massive use of generative AI in the U.S. military campaign against Iran. New details reveal how deeply AI is already embedded in intelligence, targeting, and logistics.

Anthropic's Claude Opus 4.6 saw through an AI test, cracked the encryption, and grabbed the answers itself

Anthropic’s Claude Opus 4.6 independently figured out it was being tested during a benchmark, identified the specific test, and cracked its encrypted answer key. According to Anthropic, this is the first documented case of its kind.

Hallucinated references are passing peer review at top AI conferences, and a new open tool wants to fix that

Fake citations are slipping past peer review at top AI conferences, and commercial LLMs can’t spot the fakes they generate. A new open-source tool called CiteAudit reportedly catches what GPT, Gemini, and Claude miss.

OpenAI's hardware and robotics chief quits over military deal she says lacked enough deliberation

OpenAI's hardware and robotics chief Caitlin Kalinowski resigned over the company's military collaboration, announcing her decision on LinkedIn and X. She says surveillance without judicial oversight and lethal autonomy without human sign-off "deserved more deliberation than they got." Kalinowski joined OpenAI in November 2024 from Meta, where she built the Orion AR headset.

Caitlin Kalinowski announced her resignation from OpenAI on LinkedIn, citing concerns about surveillance and lethal autonomy. | Kalinowski via LinkedIn

Her departure follows a contract between OpenAI and the Pentagon giving the military access to its models, a deal Anthropic had already rejected over safety concerns. OpenAI says the contract includes the same hard red lines against mass surveillance and autonomous weapons that Anthropic demanded. But the company agreed to softer "all lawful use" language that still leaves room for interpretation. The US government now wants to make that wording standard for all AI companies working with the state.
