FCA working on using data to predict consumer harm
The City regulator is working on leveraging data to create new tools that will allow it to detect consumer harm and intervene more quickly, amid its growing scrutiny of high-risk investments.
Speaking at the CDO Exchange for Financial Services, Jessica Rusu, chief data, information and intelligence officer at the Financial Conduct Authority (FCA), said that the threat landscape has changed for consumers and the regulator is responding to this.
Read more: FCA to publish policy statement on high-risk investment rules next year
She pointed to FCA research, published alongside the regulator’s £11m campaign highlighting the pitfalls of high-risk investments, which found that 76 per cent of consumers who have invested in high-risk products such as cryptocurrency are driven by competition with friends, family and acquaintances, and by their own past investments.
Rusu said that fraudsters benefit from new technology. Cryptocurrencies, for example, remain high risk for consumers, many of whom do not understand the risks involved, are highly volatile, and are highly likely to be used in financial crime.
She said the FCA is making the most of the data and intelligence it collects to anticipate and predict harm.
Read more: P2P investors urge FCA to hire “very different” kind of chair
“We have to make the best use of our own resources – connecting the dots in terms of intelligence across the organisation, drawing on strategies and approaches from data science, and leveraging data to create new tools and techniques which allow us to detect harm and intervene more quickly,” said Rusu.
“Connecting these different data sets is helping form new intelligence and prioritise risks. For example, we are identifying financial advisers most likely to give poor advice by tracking the outcomes from previous supervisory activity.
“This is something we will continue to build on in the months ahead. The digital unified intelligence environment we’re developing will mean we are even better equipped to anticipate harm and protect consumers.”
Rusu said the FCA aims to define and codify what it calls ‘paths to harm’ – the behaviours and events which it knows are likely to end in consumer harm or in markets failing to function – so that it can identify risks and intervene earlier.
Read more: FCA hints at clamp down on sophisticated and HNW investor exemptions
“For example, we know the conditions that might give firms an incentive to pursue aggressive selling practices,” she said.
“And we know the tactics a fraudulent seller might adopt before going on to set up a fraudulent investment programme.
“When we have the data in a shape which will let us see when those conditions are being met, we can intervene earlier – before consumers and markets suffer. It will mean a greater focus on prevention than ever before…
“It represents a cutting-edge approach to collecting, storing, and analysing data. It’s a modular technology architecture which will support all the elements of an end-to-end intelligence lifecycle – from data collection products at one end to algorithms which trigger interventions at the other.”
Rusu said machine learning and artificial intelligence will only become more important to the FCA’s work in the coming months and years.
She said the regulator wants to facilitate the broader debate on the risks and ethical questions associated with these tools, as well as to exploit their potential.