At our Wealth Ops: Live! event in October we had a fascinating discussion with Nick Eatock and Channelle Pattinson about the future impact of AI on Wealth Operations and customer servicing, ably facilitated by our very own Emma Norris. During the discussion, a question was asked about the ethics of using AI and how transparent organisations should be in declaring to their customers that some form of non-human intelligence has intervened in a process. As AI finds more practical use cases – in the advice process, in webchats, in understanding customer behaviour patterns from client mailings, even in speaking with clients in contact centres – those ethical questions around transparency become real-world decisions that organisations must confront.
Why are the role of AI, and the risks associated with its use, so much more relevant in Financial Services – and in Financial Advice in particular – than in other industries? The advice process requires a degree of trust and is, in theory at least, a long-term relationship: one that demands both parties are clear on achieving a set of outcomes that can materially affect people's lives and livelihoods. If AI is to play a role, it will do so in a world where trust sits at the heart of the relationships being formed. These are not transactional interactions; they are far more expansive and demand greater longevity. If they are to be replaced by AI, it is paramount that an organisation can build trust through evidence and transparency.
In our research with over 250 Wealth customers, our whitepaper explored feedback on how customers are engaged and how their data is used. Overwhelmingly, there was little enthusiasm for the virtues of chatbots and little interest in customer data being used more widely to link life events or promote targeted selling (much of which already exists today in the form of internet cookies). With that degree of distrust in place, it is reasonable to expect that as adoption of AI capabilities broadens, the question of transparency will become that much more important. And what does transparency mean in practice?
- Being able to explain to a customer if and when AI has been used to inform an outcome.
- Being able to explain to a customer how AI has been used, the data and insight it has leveraged and the process it has followed to support that outcome.
- Being able to demonstrate fairness, consistency and appropriateness in how AI has reached a conclusion and whether a human would have arrived at a different outcome. Avoiding any accusation of bias in the AI design is fundamental here.
Then there is the question of time. If AI is adopted to accelerate outcomes, if not improve them, how do companies ensure that when a customer asks for clarification and evidence of the process followed, it can be replicated and demonstrated in an efficient and clear way? Providers seeking to offer full transparency must consider the unintended consequences and build in the routines, procedures and processes that allow the bonnet to be lifted and the inner workings explained in a customer-centric way. This is crucial to building trust. Customers will want to know that AI has been used in a safe and consistent way, and providers must be able to prove that on demand.
This all assumes, of course, that the provider in question elects to be transparent about the use of AI at all. There is nuance here, and not every provider will reach the same conclusion for every use case. Awareness of potential consumer dissatisfaction or mistrust is important, however, especially in the context of technology innovation. For example, advisers today are not often asked by their clients to evidence their qualifications to demonstrate competency, and are rarely required to reveal their historic recommendations to show that their investment selections deliver the returns expected. Customers usually don't ask, because human interaction often starts from a position of trust, frequently via a personal recommendation or referral. When a machine is brought into the equation, for certain customer demographics the barriers immediately go up and trust becomes a key area of focus. The faceless, senseless appearance of AI means it cannot alone defend its decisions and outcomes, and will often require human intervention to justify its usage.
Today we see examples of chatbots and even webchat interactions being presented as avatars, but we also see examples where there is no attempt to disclose that the interaction is with a computer. The conversation often gives it away eventually; the question is, does it matter? Does a customer care how their question is answered, and by whom, if it is answered quickly and efficiently? As AI 'learns' and builds a repository of activity, history and information to refer to, it will broaden its reach and become an increasing part of day-to-day living – whether the consumer wants it or not, or even knows about it. Providers must tread a fine ethical line, making decisions about transparency consistently and in the interests of serving one of their most important assets: their customers.
At Simplify Consulting, we are experts in helping providers blend technology capability with a human-first servicing proposition that is seamless, honest, transparent and customer-centric. If you need help or guidance on integrating technology innovation into your customer journeys, come and talk to us today.
Carl Woodward
Director