As regulators make a push to embrace new technologies, banks must invest in AI and machine learning now, or risk their front office controls being left behind.
Communications surveillance has traditionally been an inefficient and expensive activity for front office controls functions, with analysts required to read or listen to vast quantities of emails and voice calls. While lexicon-based IT systems have been developed to scan communications for specific word patterns and flag potential events, the ratio of false positives generated has remained over 99%, meaning humans still need to filter through piles of alerts to identify which ones warrant their attention.
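The weakness of the lexicon approach is easy to see in a toy sketch (the phrases below are hypothetical illustrations, not a real compliance lexicon): the matcher flags any message containing a listed phrase, regardless of context, which is exactly why false-positive rates run so high.

```python
# A toy lexicon of trigger phrases, as a rule-based surveillance system might use
# (illustrative examples only, not a real compliance lexicon).
LEXICON = ["keep this between us", "off the record", "delete this email"]

def flag_message(text):
    """Return the lexicon phrases found in a message, if any."""
    lowered = text.lower()
    return [phrase for phrase in LEXICON if phrase in lowered]

messages = [
    "Let's keep this between us until the announcement.",  # possibly suspicious
    "Off the record, the canteen coffee is terrible.",     # harmless, still flagged
    "Please review the attached report.",                  # not flagged
]

for msg in messages:
    hits = flag_message(msg)
    if hits:
        print("ALERT:", msg, "->", hits)
```

Because the second message is flagged just as readily as the first, a human analyst still has to read both — the context-blindness that machine-learning models are now addressing.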
Machine learning is finally changing this: machines can now understand not only the individual words in written text but also their context. This allows banks to gain a broader, faster view across communications at the first pass, resulting in significantly more accurate alerts and freeing up analysts to focus on deeper investigations.
“No longer a hypothetical capability, AI is already transforming financial services surveillance, with the world’s leading banks making active use of the technology,” notes Tim Estes, President and Founder at AI tech provider Digital Reasoning. With regulators asking more of banks and experimenting with AI themselves, being able to demonstrate such comprehensive surveillance coverage will soon become the new standard, he predicts.
The UK’s Financial Conduct Authority (FCA), for example, is taking a lead on exploring how machine-learning might be used to make it easier, faster and cheaper for banks to meet a range of regulatory requirements. In November 2018, it announced that a Digital Regulatory Reporting (DRR) project launched just four months previously had already achieved proof of concept, with machine-reading technology successfully applied to reporting related to capital requirements and mortgage-lending criteria.
The FCA’s executive committee member Megan Butler has also called on financial institutions to embrace intelligent technologies such as AI, machine learning, natural language processing and robotics as the next frontier in combating financial crime. These would, for example, help them identify suspicious activity as it is being planned rather than after it has occurred, although the large number of legacy IT systems still in use at firms remains a challenge, she notes.
Focusing on people
The potential applications of artificial intelligence to surveillance and other compliance functions are expanding fast. Firms like Digital Reasoning are starting to build AI-powered systems to monitor voice calls, with machines taught to listen to recordings, transcribe problematic ones and highlight in transcripts areas of concern. While challenges remain – for example around teaching machines to recognise and understand different accents – these are being overcome quickly.
Different forms of surveillance are also gradually being integrated to help front office controls functions build a fuller picture. Developers are starting, for example, to integrate surveillance of email and voice into a common view, while others are experimenting with adding trade alerts to that view.
Ultimately, surveillance will evolve from being event-centric to person-centric, with machines’ ability to accumulate knowledge and memory allowing banks to view individuals’ various risk scores, for example related to fraud or market abuse, as one. The capacity to model hundreds or thousands of people – in addition to billions of emails – will transform the efficiency of surveillance, says Estes, who predicts such capabilities could be available to banks within two years.
Fast-forward another two or three years, and AI could allow a unified view of individuals’ risk profiles to be triaged and managed based on business interests, so that a bank’s front office, anti-money laundering financial crime group and human resources team, for example, are each provided with a customised view. At this point, risk will finally have become a truly horizontal function across organisations.
Clock is ticking
To keep up with such advances, banks need to act now. Given that it can take one to two years for organisations to roll out AI-powered surveillance systems once training and regulatory approval are included, “you either have to be acquiring or implementing now,” says Estes. “In four years, I don’t think regulators will accept brittle, rule-based, lexicon-based systems for surveillance. So if you’re not making a jump in the next 18 months, you’re going to be at risk by 2022 or 2023.” While other providers also market machine-learning capabilities in surveillance, almost none have been audited by regulators, with the exception of Digital Reasoning’s multiple Tier 1 deployments.
Numerous investment banks are already using Digital Reasoning’s Conduct Surveillance system, which uses AI to analyse text and audio communications and has a track record of increasing the identification of risks five-fold while slashing false positives by up to 95%. The system is also helping surveillance professionals move towards the person-centric approach that Estes believes will transform conduct risk mitigation. For example, it allows front office functions to build 360-degree profiles of employees, helping banks understand not only their actions but their motives.
To maximise the value of these changes, banks also have to invest in their people alongside their technology. While the use of AI should free human resources from detection activities, those personnel could be upskilled to amplify an organisation’s remediation and investigation capabilities, Estes advises. Banks will then be able to demonstrate to regulators that they are not only much more efficient at surveillance but also significantly more effective.