The Financial Conduct Authority (FCA) has called for considerate use of big data, artificial intelligence and machine learning algorithms in financial services, urging firms to use the technologies in a way that is not detrimental to customers’ trust.
Charles Randell, chair of the FCA and the Payment Systems Regulator, urged firms to “anticipate the fundamental questions” posed by the implementation of new technologies, which he said challenged the concept of informed consent as previously understood by the sector.
He said: “The power of ‘big data’ corporations and their central place in providing services … call into question the adequacy of the traditional liberal approach to the relationship between financial services firms and their customers,” an approach he said rested on the assumption that consumers would make responsible decisions if given fair disclosure. Unfair contract terms are the subject of a consultation launched by the FCA in May.
He cited anecdotal examples of insurance comparison websites showing higher premiums to certain minority demographics, as well as firms using behavioural insights to raise prices for customers who do not shop around for better deals.
He also mentioned positive examples of companies leveraging technology to improve financial inclusion, for instance by using transactional data to augment traditional creditworthiness assessments.
He concluded by urging firms to build trust by understanding customers’ “complex and changing” views about what they consider fair use of data by financial services firms.
“Trust requires good communication so that consumers understand and accept a firm’s approach to using their data,” Randell said.
“By good communication, I don’t mean pages and pages of obscure disclosures, disclaimers and consents. I mean short and readable statements which make it clear what firms will and won’t do with their customers’ data.
“These need to be developed with consumers, not imposed on them. A number of firms do this already but many do not.”