The Proxy World: How AI Will Reshape the Way Ordinary People Interact With Money


The Financial Conduct Authority has rarely been accused of understating its ambitions. However, the opening framing of the Mills Review, its newly launched call for input into the long-term impact of AI on retail financial services, is striking in its candour. This, the FCA suggests, may be a genuine inflection point. Not an incremental upgrade. Not a productivity enhancement. A structural transformation in the relationship between ordinary people and their financial lives.

At Arcara Strat, we think that framing is correct, and that the two dimensions of this transformation that deserve the most urgent attention are, first, what happens to consumers when AI begins not just assisting their financial decisions but making them, and second, what happens to the competitive structure of financial services when the entities that control those AI systems may not be banks, insurers, or investment firms at all.

These are not distant hypotheticals. They are trajectories already in motion.

From Assistant to Agent: The Quiet Transfer of Financial Decision-Making

The Mills Review describes a progression that is worth sitting with carefully. Right now, millions of consumers use AI tools to help them understand financial concepts, compare products, or navigate decisions. This is AI as an assistive tool: useful, convenient, broadly benign. The next stage is AI as an advisory system: one that doesn't just explain options but recommends actions. Beyond that lies what the FCA terms the "proxy world": a future in which consumers delegate financial decisions to autonomous AI agents that act on their behalf, within agreed parameters, without requiring active involvement at each decision point.

The shift sounds gradual. In practice, each stage normalises the next. And the cumulative effect is profound: a quiet but fundamental transfer of financial agency from the individual to the algorithm.

Consider what this means in concrete terms. An AI agent managing your finances could automatically switch your mortgage when a better deal becomes available, rebalance your investment portfolio in response to market conditions, optimise your insurance coverage as your life circumstances change, and manage your day-to-day cash flows to minimise costs and maximise returns. All of this could happen continuously, in the background, without you making a single active decision. The promise is extraordinary: the elimination of financial inertia, and the democratisation of sophisticated financial management that was previously available only to the wealthy.
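To make "within agreed parameters" concrete, here is a minimal sketch of what a consumer mandate for such an agent might look like. Everything in it is hypothetical: the `Mandate` fields, thresholds, and the `should_switch_mortgage` decision rule are invented for illustration, not drawn from any real product or from the Mills Review itself.

```python
from dataclasses import dataclass

@dataclass
class Mandate:
    """Hypothetical consumer-agreed parameters for an autonomous agent."""
    min_rate_saving: float             # only switch if the rate falls by at least this (pct points)
    max_exit_fee: float                # never pay more than this to leave a product (GBP)
    require_confirmation_above: float  # decisions on loans above this value need human sign-off (GBP)

def should_switch_mortgage(current_rate: float, offered_rate: float,
                           exit_fee: float, loan_value: float,
                           mandate: Mandate) -> str:
    """Return the agent's action for a mortgage-switch opportunity."""
    saving = current_rate - offered_rate
    if saving < mandate.min_rate_saving or exit_fee > mandate.max_exit_fee:
        return "hold"          # opportunity falls outside the agreed parameters
    if loan_value > mandate.require_confirmation_above:
        return "ask_consumer"  # escalate rather than act autonomously
    return "switch"            # act without active consumer involvement

mandate = Mandate(min_rate_saving=0.5, max_exit_fee=500.0,
                  require_confirmation_above=250_000.0)
print(should_switch_mortgage(5.2, 4.4, 300.0, 180_000.0, mandate))  # -> "switch"
```

Even this toy version surfaces the governance question: the consumer sets the parameters once, but every subsequent decision, including the choice to escalate or not, belongs to the agent.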

But the FCA is right to ask the question that sits beneath that promise: in this world, can consumers meaningfully assess whether their AI agent is actually acting in their interests? And what happens when it isn't?

The Financial Literacy Paradox

One of the most thought-provoking tensions in the Mills Review is between AI's potential to solve the problem of poor financial literacy and its potential to make that problem permanently worse.

The optimistic case is genuine. AI could, for the first time, give every consumer access to something approaching personalised financial guidance, breaking down complex products, flagging poor value, and navigating decisions that have historically required expensive professional advice. For the millions of people in the UK who are financially underserved, who don't have an IFA, who don't understand their pension, who default to inertia on insurance renewals, this could be transformative.

But the FCA identifies a countervailing risk that is equally serious: if consumers increasingly delegate financial decisions to AI agents, they may never develop the understanding needed to evaluate whether those decisions are good ones. Financial literacy, already concerningly low in the UK, could atrophy further, not because information is unavailable, but because the perceived need to engage with it has been eliminated. You don't learn to cook if a chef is always on call.

This creates a paradox. The very capability that could democratise access to good financial outcomes may simultaneously erode the consumer's ability to recognise bad ones. And in a world where AI agents can be misconfigured, manipulated, or simply wrong, that erosion of understanding is not a minor inconvenience; it is a structural vulnerability. The consumer who has fully delegated their financial life to an AI agent, and who no longer has the literacy to interrogate it, has no meaningful recourse when something goes wrong.

The Mills Review is careful not to resolve this tension. It is asking the right question. The answers will matter enormously for how AI in financial services actually develops, and who benefits from it.

Hyper-Personalisation: Opportunity and Exploitation

AI's capacity for hyper-personalisation is among its most commercially significant capabilities and among its most double-edged. The ability to tailor financial products, pricing, and communications to an individual's specific circumstances, behaviours, and psychological profile offers genuine value. Better-matched products, fairer pricing, and more relevant guidance are all plausible outcomes of personalisation done well.

Personalisation done poorly, however, or deliberately, looks quite different. The same capability that allows an AI system to identify that a consumer would benefit from a lower-cost mortgage also allows it to identify that a consumer is financially anxious, cognitively stretched, or in a moment of vulnerability, and to calibrate its communication accordingly. The FCA notes explicitly that AI may create new ways for firms to target vulnerable customers. This is not hypothetical. The data signals that indicate vulnerability (irregular spending patterns, late-night financial searches, repeated small borrowing) are precisely the signals that sophisticated AI systems are best at detecting. And the risk compounds as consumer data becomes a commodity: firms of every kind can now acquire these datasets and train AI systems whose outputs reflect exactly what those datasets reveal.

The hyper-personalisation question therefore sits at the intersection of commercial opportunity and consumer harm in a way that is genuinely difficult to navigate. A firm using AI to identify vulnerable customers in order to offer them better support is behaving well. A firm using the same capability to identify vulnerable customers in order to sell them inappropriate products is behaving very badly indeed. The underlying technology is identical. The intent and governance surrounding it are everything, and intent is notoriously difficult to supervise at scale.
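The dual-use point can be made very literally in code. In this deliberately crude sketch, the signal names, thresholds, and routing labels are all invented for illustration; the detector is identical in both uses, and only the firm's downstream intent changes the outcome.

```python
def vulnerability_signals(events: list[dict]) -> int:
    """Count crude behavioural signals of the kind the article describes."""
    late_night_searches = sum(1 for e in events
                              if e["type"] == "search" and e["hour"] >= 23)
    small_loans = sum(1 for e in events
                      if e["type"] == "borrow" and e["amount"] < 100)
    return late_night_searches + small_loans

def route_customer(events: list[dict], intent: str) -> str:
    """Same detector, opposite outcomes depending on the firm's intent."""
    flagged = vulnerability_signals(events) >= 3  # arbitrary illustrative threshold
    if not flagged:
        return "standard_journey"
    # Nothing in the detection code distinguishes good conduct from bad:
    # the divergence happens entirely in governance, not in the model.
    return "offer_support" if intent == "support" else "targeted_upsell"
```

This is why supervising the technology alone is insufficient: the same `flagged` boolean can trigger a support intervention or a predatory sales journey, and no inspection of the detector would tell a regulator which.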

Who Will Control Your Financial Life? The Competitive Structure Question

Alongside the consumer dimension, the Mills Review raises a competitive structure question that has received far less public attention than it deserves, and which has profound implications for the UK financial services industry.

The traditional architecture of retail financial services is straightforward: consumers interact with regulated firms (banks, insurers, investment platforms) that provide products and services within a defined regulatory perimeter. The FCA regulates those firms. Accountability is relatively clear.

AI disrupts that architecture in a specific and important way. If consumers increasingly access financial services through AI agents, personal financial proxies that compare products, make recommendations, and execute decisions on their behalf, then the entity that controls the AI agent controls the consumer relationship. And that entity may not be a bank, an insurer, or any other traditionally regulated financial services firm. It may be a technology company. It may be a Big Tech platform. It may be an AI-native intermediary that doesn't fit neatly into any existing regulatory category.

The Mills Review draws an instructive parallel with mobile wallets, where significant value was captured by technology firms who inserted themselves between consumers and financial services providers without becoming regulated providers themselves. The same dynamic, at far greater scale and with far greater consequence, is plausible in an AI-enabled financial services landscape.

Winner Takes Most: The Data Flywheel Problem

The competitive stakes of this shift are heightened by a structural feature of AI systems that the Mills Review identifies clearly: data feedback loops. AI systems improve with data. Firms with more consumer data build better AI models. Better AI models attract more consumers. More consumers generate more data. The loop reinforces itself, and in financial services, where data is simultaneously abundant and deeply personal, it could generate market concentration at a speed and scale that outpaces any regulatory response.
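The flywheel dynamic above can be illustrated with a toy two-firm simulation. Every number in it is an assumption: in particular, it assumes model quality scales superlinearly with accumulated data (here, quadratically), which is what turns a modest head start into concentration; with purely proportional returns, market shares would simply hold steady.

```python
def simulate_flywheel(data_a: float, data_b: float, rounds: int = 10) -> float:
    """Toy model of the data feedback loop; returns firm A's final data share.

    Each round, new consumers split between the two firms in proportion
    to (assumed) model quality, taken here as the square of accumulated
    data, and each consumer's activity adds data back to the chosen firm.
    """
    consumers_per_round = 100.0
    for _ in range(rounds):
        quality_a, quality_b = data_a ** 2, data_b ** 2     # superlinear returns to data
        share_a = quality_a / (quality_a + quality_b)       # better model -> more sign-ups
        data_a += consumers_per_round * share_a             # sign-ups feed the winner's dataset
        data_b += consumers_per_round * (1 - share_a)
    return data_a / (data_a + data_b)

# A 60/40 starting split does not wash out; it compounds.
print(simulate_flywheel(data_a=60.0, data_b=40.0))
```

The point of the sketch is structural, not predictive: no firm in the model behaves anti-competitively, yet the loop still concentrates the market, which is exactly the "winner takes most" concern.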

This is the "winner takes most" dynamic that the FCA is rightly concerned about. It does not require deliberate anti-competitive behaviour. It is simply the structural logic of AI at scale. A Big Tech firm that already holds data on hundreds of millions of consumers' spending patterns, search behaviour, and life events is not starting from the same position as a traditional high street bank when it comes to building a financial AI agent. The asymmetry is significant, and it favours incumbency of a very different kind than the incumbency that currently defines financial services.

The question of whether AI ultimately increases competition (by lowering barriers to entry, enabling challengers to scale rapidly, and forcing incumbents to improve) or decreases it (by entrenching data-rich platforms and creating winner-takes-most dynamics) is genuinely open. The Mills Review does not answer it. Nobody can yet. But the structural forces pulling toward concentration are real, and they deserve to be part of every strategic conversation happening in financial services boardrooms right now.

The Consumer Who Gets Left Behind 

There is a final dimension of the consumer picture that the Mills Review raises and that risks being crowded out by the more technically complex discussions: exclusion. Not every consumer will benefit equally from an AI-enabled financial services landscape. The benefits of AI (personalisation, automation, optimised decision-making) accrue most naturally to consumers who are digitally confident, financially resilient, and comfortable delegating to technology. The risks (manipulation, exploitation, exclusion from AI-mediated products) fall disproportionately on those who are digitally excluded, financially vulnerable, or simply distrustful of technology they don't understand.

The UK already has significant financial exclusion. If the mainstream of financial services migrates toward AI-mediated models, the question of what provision looks like for those outside that mainstream becomes urgent. The Mills Review acknowledges this. It will need to be central to whatever recommendations emerge, because an AI-enabled financial services system that works brilliantly for the digitally confident and financially comfortable, while leaving the vulnerable further behind, has not improved outcomes. It has stratified them.

What This Means Now

The full Mills Review Report will be published in summer 2026. By then, much of the competitive positioning that will define the AI-enabled financial services landscape will already be underway. The firms, platforms, and technology providers that are thinking seriously about these questions now, about consumer agency, about the data flywheel, about who will control the consumer interface, will be better placed than those waiting for regulatory clarity before they act.

At Arcara Strat, we believe the consumer dimension of AI in financial services is the least well understood and the most consequential. The technology questions are fascinating. The regulatory questions are important. However, the question of what happens to ordinary people's relationship with their own financial lives (their agency, their literacy, their vulnerability, their exclusion) is the question that will ultimately determine whether AI in financial services is a force for genuine democratisation or a more sophisticated mechanism for concentrating advantage among those who already possess it.

That question deserves more attention than it is currently getting.
