Summary

The financial industry will drive the adoption of large language models, according to a study by the Alan Turing Institute, conducted in collaboration with HSBC, Accenture, and the UK's Financial Conduct Authority (FCA).


The financial industry, which has historically been quick to adopt new technologies, has already begun using language models to support various internal processes and is actively evaluating potential uses for market-facing activities in advisory and trading, according to the authors. For example, a survey by UK Finance found that more than 70 percent of participating financial institutions were already in the proof-of-concept phase for generative AI solutions in 2023.

In addition, specialized models already exist: Bloomberg, for example, has developed BloombergGPT, a 50-billion-parameter model designed for a range of financial tasks, such as news analysis and question answering. In benchmark tests, however, the model was outperformed by GPT-4. FinGPT is another example of a specialized financial language model.

Within the next two years, experts expect language models to be integrated into external financial services such as investment banking and venture capital strategy development.


Experts predict widespread use of generative AI in finance

In addition to a literature review, the researchers conducted a workshop with 43 experts from major commercial and investment banks, regulators, insurance companies, payment service providers, government agencies, and the legal profession.

The majority of workshop participants (52%) already use language models to improve performance in information-oriented tasks, from managing meeting notes to gaining insight into cybersecurity and compliance. 29% use them to improve critical thinking, and another 16% use them to break down complex tasks.

The industry is also already deploying systems that boost productivity by quickly analyzing large volumes of text to simplify decision-making and risk profiling, and to improve investment research and back-office processes, the study found.

When asked about the future of generative AI in the financial sector, respondents said they expect the systems to be integrated into services such as investment banking and venture capital strategy development within the next two years.

They also thought it likely that language models would be integrated to improve human-machine interaction, such as dictation, and that embedded AI assistants could reduce the complexity of knowledge-intensive tasks such as regulatory review.


Privacy concerns top the list

However, the study also identifies risks. Privacy concerns top the list, with nearly half of respondents worried about the privacy risks associated with language models, particularly potential data leakage. Respondents also raised concerns about the accuracy of generated text and the risk of automation errors, as increasing reliance on language models could erode human judgment and oversight.

The study concludes with recommendations to develop an industry-wide analysis of language model evaluations and to explore open-source models as a path to trusted and secure integration.

  • Large Language Models (LLMs) could revolutionize the financial sector within two years by detecting fraud, generating financial information, and automating customer service, according to a study by the Alan Turing Institute.
  • Already, 52% of financial professionals surveyed are using LLMs to improve performance on information-oriented tasks, while 29% are using them to improve critical thinking skills and 16% are using them to decompose complex tasks.
  • The study recommends that financial service providers, regulators, and policymakers work together across sectors to share and develop knowledge on the implementation and use of LLMs, particularly regarding security concerns.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.