Balancing Innovation and Risk: Current and Future Use of LLMs in the Financial Industry

By Uday Kamath, Chief Analytics Officer at Smarsh

Large language models (LLMs) have revolutionized how we interact with clients, partners, our teams, and technology within the finance industry. According to Gartner, the adoption of AI by finance functions has increased significantly in the past year, with 58 percent using the technology in 2024 – a rise of 21 percentage points from 2023. While 42 percent of finance functions do not currently use AI, half are planning implementation.

Although promising in theory, AI demands an abundance of caution from financial organizations, largely due to the regulatory requirements they must uphold – like the EU’s Artificial Intelligence Act. In addition, there are inherent technical and ethical problems surrounding LLMs that the financial industry must address.

Addressing Common LLM Hurdles

In 2023, almost 40 percent of financial services experts listed data issues – such as privacy, sovereignty, and disparate locations – as the main challenge in achieving their company’s AI goals. This privacy issue within LLMs is particularly important to the financial sector due to the sensitive nature of its customers’ data and the risks of mishandling it, in addition to the regulatory and compliance landscape.

However, robust privacy measures can allow financial institutions to leverage AI responsibly while minimizing risk to their customers and reputations. For companies that rely on AI models, a common remedy is to adopt LLMs that are transparent about their training data (both pre-training and fine-tuning) and open about their processes and parameters. This is only part of the solution; privacy-preserving techniques, when employed in the context of LLMs, can further ensure responsible AI use.
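One simple privacy-preserving step is to redact sensitive fields before a prompt ever leaves the organization. The sketch below is a minimal illustration using regular expressions; the patterns and placeholder labels are assumptions for this example, and a production system would rely on a vetted entity-recognition service rather than hand-written regexes.

```python
import re

# Hypothetical patterns for common PII in financial text (illustrative only).
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with typed placeholders before the text
    reaches an external LLM API."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client (john.doe@example.com, SSN 123-45-6789) asked about fees."
print(redact(prompt))
```

Because the redaction happens client-side, the external model never sees the raw identifiers, which narrows both the regulatory exposure and the blast radius of any breach.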

Hallucination – when an LLM produces incorrect, sometimes unrelated, or entirely fabricated information that nonetheless appears legitimate – is another issue. One reason this happens is that AI generates responses based on patterns in its training data rather than genuinely understanding the topic. Contributing factors include knowledge deficiencies, training data biases and generation strategy risks. Hallucinations are a massive issue in the finance industry, which places high value on accuracy, compliance and trust.

Although hallucinations will always be an inherent characteristic of LLMs, they can be mitigated. Helpful practices include manually refining the pre-training corpus with filtering techniques and curating data during fine-tuning. However, mitigation at inference time – during deployment or real-time use – is often the most practical option, both because it can be controlled directly and because it is far cheaper than retraining.
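One common inference-time safeguard is to check a model's answer against retrieved source documents and route weakly supported answers to human review. The sketch below uses crude lexical overlap as the grounding signal; the function names and threshold are assumptions for this example, and real systems typically use retrieval-augmented generation with an entailment model rather than token matching.

```python
def grounding_score(answer: str, sources: list[str]) -> float:
    """Fraction of answer tokens that appear in the retrieved sources.
    A crude lexical proxy for groundedness (illustrative only)."""
    answer_tokens = {t.lower().strip(".,") for t in answer.split()}
    source_tokens = set()
    for doc in sources:
        source_tokens.update(t.lower().strip(".,") for t in doc.split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & source_tokens) / len(answer_tokens)

def gate(answer: str, sources: list[str], threshold: float = 0.6) -> str:
    """Pass well-grounded answers through; flag the rest for review."""
    if grounding_score(answer, sources) >= threshold:
        return answer
    return "[NEEDS REVIEW]"
```

The key design point is that the gate sits outside the model: no retraining is required, and the threshold can be tuned per use case, which is why inference-time controls are attractive in regulated settings.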

Lastly, bias is a critical issue in the financial space as it can lead to unfair, discriminatory, or unethical outcomes. AI bias refers to the unequal treatment or outcomes among different social groups perpetuated by the tool. These biases exist in the data and, therefore, surface in the language model. In LLMs, bias stems from data selection, creator demographics, and language or cultural skew. It is imperative that the data an LLM is trained on be filtered to suppress content that misrepresents particular groups. Augmenting and filtering this data are among several techniques that can help mitigate bias.
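One widely used augmentation technique is counterfactual data augmentation: for each training sentence, a copy is added with demographic terms swapped so both variants appear equally often. The sketch below is a minimal illustration; the swap list is a tiny assumption-laden stand-in for a curated mapping, and it deliberately handles only whole lowercase words.

```python
# Hypothetical term pairs; a production list would be curated by domain
# experts and cover far more dimensions than gendered pronouns.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his"}

def counterfactual(sentence: str) -> str:
    """Return a copy of the sentence with mapped terms swapped."""
    return " ".join(SWAPS.get(w.lower(), w) for w in sentence.split())

def augment(corpus: list[str]) -> list[str]:
    """Balance the corpus by appending a swapped copy of each example."""
    return corpus + [counterfactual(s) for s in corpus]
```

After augmentation, associations the model learns between, say, loan repayment and a pronoun are counterbalanced by the mirrored example, which is the core idea behind this family of filtering-and-augmentation mitigations.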

What’s Next for the Financial Sector?

Instead of utilizing very large language models, AI experts are moving toward training smaller, domain-specific models that are more cost-effective for organizations and easier to deploy. Domain-specific language models can be built explicitly for the finance industry by fine-tuning them on domain-specific data and terminology.

These models are ideal for complex and regulated professions, like financial analysis, where precision is essential. For example, BloombergGPT is trained on extensive financial data – like news articles, financial reports, and Bloomberg’s proprietary data – to enhance tasks such as risk management and financial analysis. Because these models are purpose-trained on financial content, they are likely to produce fewer errors and hallucinations than general-purpose models faced with specialized content.
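In practice, domain-specific fine-tuning starts with assembling supervised examples in the chat-style JSONL format most fine-tuning pipelines accept. The sketch below shows that data-preparation step; the exact field names vary by provider, and the file name and sample question are assumptions for this example.

```python
import json

def to_finetune_record(question: str, answer: str) -> str:
    """Format one supervised example as a JSON line in the chat-style
    shape many fine-tuning APIs accept (field names vary by provider)."""
    return json.dumps({
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    })

# Tiny illustrative dataset of finance terminology Q&A pairs.
examples = [
    ("What does EBITDA stand for?",
     "Earnings before interest, taxes, depreciation and amortization."),
]

with open("finance_sft.jsonl", "w") as f:
    for q, a in examples:
        f.write(to_finetune_record(q, a) + "\n")
```

The quality and coverage of this curated file, far more than model size, is what gives a domain-specific model its edge on specialized content.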

As AI continues to grow and integrate into the financial industry, the role of LLMs has become increasingly significant. While LLMs offer immense opportunities, business leaders must recognize and mitigate the associated risks to ensure LLMs can achieve their full potential in finance.

Uday Kamath is Chief Analytics Officer at Smarsh, a SaaS company headquartered in Portland, OR, that provides archiving, compliance, supervision and e-discovery tools for companies in highly regulated industries.