28 million UK adults use AI for money management, but 80% don't trust it

Over half of UK adults now use AI tools like ChatGPT for budgeting and investment research, according to Lloyds Banking Group's 2025 Consumer Digital Index. The data reveals a significant trust gap: while adoption is high, 80% worry about accuracy and 83% have privacy concerns. For financial services providers, the challenge isn't building AI tools but earning confidence in high-stakes decisions.

The Numbers

56% of UK adults (28.8 million people) used AI to manage money in the past year, making personal finance the nation's top AI use case, according to Lloyds Banking Group's 2025 Consumer Digital Index. ChatGPT dominates, used by 60% of AI-finance users. One in three use these tools weekly or more for budgeting, investment research, and pension planning.

Users report saving an average of £399 annually through AI-generated insights. The use cases are practical: budgeting tops the list, followed by future planning including pensions (39%), investment recommendations (37%), and debt management (25%).

The Trust Problem

The data reveals a significant gap between usage and confidence: 80% of AI users worry about inaccurate or outdated information, 83% have data privacy concerns, and 69% question whether AI can truly personalise advice for their circumstances.

Only 19% of Britons are comfortable receiving AI-generated financial advice, versus 60% who are uncomfortable. Just 15% believe AI services from financial providers act in their best interests.

For comparison: 83% say human support is important when choosing financial providers, and 61% would switch if human support were removed.

What This Means for Financial Services

The pattern is clear. Users embrace AI for operational tasks like fraud detection and 24/7 support, but resist it in advisory roles. Even among 28-to-40-year-olds (the most AI-curious demographic), 23% want to start small and see proof of value before expanding use.

For UK financial institutions, the regulatory environment adds complexity. The FCA's AI Lab and Sprint programme provide testing frameworks, but providers face transaction reporting requirements, approved person obligations, and PRA guidelines on AI implementation in wealth management.

The market is discovering the limits of algorithmic confidence in high-stakes decisions. The challenge for enterprise financial services isn't building AI tools. It's earning trust at scale while meeting regulatory requirements that assume human judgment remains essential.

History suggests adoption of AI in financial advice will remain conditional on human oversight. The question is whether institutions can close the trust gap faster than competitors, or whether regulatory caution will slow innovation industry-wide.