The visible effect of the digitalization of banking has so far centered largely on payments and reduced friction in front-end applications. However, to avoid putting digital lipstick on a pig, banks should prepare to spend 70-80 cents on digitizing the back end for every 20-30 cents spent on the front end.
One of the biggest hurdles for banks is the never-ending growth in complexity and regulation facing the compliance department. This is one of the areas in banking where Artificial Intelligence (AI) shows vast promise. I have already covered this topic in a previous post, but as both the technology and my own understanding of the field advance, it is time for an update.
Machine learning has already been applied with significant success for more than a decade in the detection of credit card fraud. PayPal, for example, has allegedly cut its fraud false alerts in half by using an AI monitoring system that can identify benevolent reasons for seemingly bad behavior. As machine learning techniques become more advanced, banks are looking to Anti-Money Laundering (AML) as one of the hot application areas. According to WealthInsight, global AML spending will exceed $8 billion in 2017, up from $5.9 billion in 2013.
One of the big problems in AML today is the number of false positives. By some industry estimates, as much as 90 to 95 percent of the alerts generated by traditional parameter-based Transaction Monitoring Systems (TMS) are false positives. Conversion rates from potentially fraudulent transactions to actual cases worthy of investigation vary widely: according to a report by EY, the best retail banks achieve conversion rates of up to 20 percent in some cases, while most banks average 5-7 percent according to American Express. In addition to the high number of false positives, parameter-based monitoring systems rely on static rule-based engines that require constant maintenance and governance, increasing the need for skilled staff.
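To make the limitation concrete, a parameter-based TMS boils down to a list of static, hand-tuned rules applied to every transaction. The sketch below is a deliberately simplified illustration; the rule names, thresholds, and transactions are all invented, not any real bank's configuration:

```python
# A static, parameter-based monitoring sketch: fixed rules with
# hand-set thresholds, the kind of engine that fires on anything
# crossing a parameter regardless of context. All data is invented.

RULES = [
    ("large_cash", lambda t: t["type"] == "cash" and t["amount"] > 10_000),
    ("rapid_movement", lambda t: t["in_out_within_24h"]),
]

def run_rules(transactions):
    """Return a (rule_name, transaction_id) alert for every rule that fires."""
    alerts = []
    for t in transactions:
        for name, rule in RULES:
            if rule(t):
                alerts.append((name, t["id"]))
    return alerts

txs = [
    {"id": 1, "type": "cash", "amount": 12_000, "in_out_within_24h": False},
    {"id": 2, "type": "card", "amount": 50, "in_out_within_24h": False},
    {"id": 3, "type": "wire", "amount": 3_000, "in_out_within_24h": True},
]
print(run_rules(txs))  # → [('large_cash', 1), ('rapid_movement', 3)]
```

Note what the engine cannot do: transaction 1 may be a restaurant's routine cash deposit, but the rule fires anyway, and every threshold change means manual re-tuning and governance. That is the maintenance burden and false-positive rate described above.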
By utilizing machine learning for AML it is possible to exploit unprecedented amounts of unstructured data, thus looking beyond transaction and entity data.
When looking at how to solve this problem it is important to distinguish between supervised and unsupervised machine learning.
Supervised machine learning is the most common method: the software is provided with the data, the goal, and the expected output, allowing it to identify algorithms that reach the expected result. Supervised learning allows the AI to use a feedback loop to further refine its intended task. If it identifies potential fraud that turns out to be legitimate, it can incorporate that feedback and use it in future evaluations.
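As a minimal sketch of the supervised setting, the toy classifier below learns from transactions that have already been labeled "legit" or "fraud" and assigns new transactions to the nearest class. The features, amounts, and labels are invented for illustration; a production system would use far richer features and a real learning algorithm:

```python
# Supervised-learning sketch: a nearest-centroid classifier trained on
# labeled transactions (features: amount in USD, hour of day).
# All data and labels below are made up for illustration.

def centroid(rows):
    """Mean of each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def train(samples):
    """samples: list of (features, label); returns one centroid per label."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared Euclidean)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Labeled history: ([amount_usd, hour_of_day], label)
history = [
    ([25.0, 14], "legit"), ([40.0, 11], "legit"), ([12.5, 9], "legit"),
    ([980.0, 3], "fraud"), ([1500.0, 2], "fraud"), ([1200.0, 4], "fraud"),
]
model = train(history)
print(predict(model, [30.0, 13]))   # a daytime purchase of typical size
print(predict(model, [1100.0, 3]))  # a large transfer in the small hours
```

The feedback loop described above corresponds to appending each analyst-confirmed outcome to `history` and retraining, so a flagged transaction that proved legitimate pulls the model toward treating similar activity as normal.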
Unsupervised learning provides the software with only the data and the goal, but no expected output. This is more complex and allows the AI to identify previously unknown results. As the software receives more data, it continues to refine its algorithm, becoming increasingly efficient at its task.
When it comes to reducing false positives, unsupervised machine learning shows vast promise in cutting false positive alerts without compromising compliance with regulatory guidelines. It can detect hidden patterns in large data sets, such as fraudulent transactions and accounts, without prior knowledge of what a fraudulent transaction or account looks like. It can also rule out false positives by identifying reasons for certain activity (investigation that would normally be done by an analyst), and spot connections and patterns too complex to be picked up by straightforward rule-based monitoring. This is different from supervised machine learning, which requires knowledge of previous patterns to catch similar ones in the future.
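The key point, that no labeled examples of fraud are needed, can be sketched with simple statistical outlier detection. Everything below (amounts, the z-score cutoff) is an invented toy; real AML systems cluster far richer behavioral features, but the principle of flagging what deviates from the learned norm is the same:

```python
# Unsupervised sketch: flag transactions whose amount deviates strongly
# from the rest of the population, with NO labeled examples of fraud.
# Amounts and the cutoff are invented for illustration.
import statistics

def flag_outliers(amounts, z_cutoff=2.5):
    """Return indices of amounts more than z_cutoff std devs from the mean."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts)
            if abs(a - mean) / stdev > z_cutoff]

amounts = [22, 35, 18, 41, 27, 33, 19, 25, 30, 9500]
print(flag_outliers(amounts))  # → [9], the one anomalous amount
```

Nothing in the code encodes what fraud looks like; the anomaly is defined purely by its distance from the observed distribution, which is why the approach can surface previously unknown patterns that a static rule set would miss.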
To benefit from these possibilities, HSBC has partnered with the Silicon Valley-based artificial intelligence startup Ayasdi to automate anti-money laundering investigations that have traditionally been conducted by vast numbers of human operators. In the pilot alone, HSBC saw the number of investigations drop by 20 percent without reducing the number of cases referred for further scrutiny.
However, there are pitfalls to employing unsupervised machine learning for AML. Because the system draws on more data than a human could comprehend, it may prove difficult to understand why a machine makes the decisions it does. Biased information may also prove self-reinforcing, as the algorithm seeks empirical evidence to support already biased data.
The rapid pace of technological development, combined with increasing levels of cybercrime and an ever-changing regulatory landscape, makes a technology-driven compliance function a necessity rather than a competitive advantage.