Risky business: The case for AI in regulating banking
Most people would agree that when bankers got too clever and businesses too complex, there were consequences. The 2008 financial crisis touched as many people as it did because banks were woven into our lives - leaving societies exposed to risks they didn’t necessarily understand.
Did the US Federal Reserve miss the red flags? Possibly. But hindsight is always 20/20.
However, we’d be naive to miss the warning signs again.
Today, the world has changed; but in many ways, it feels familiar - our reliance on large firms is only increasing, and one error - in a bank’s system, in a social media site’s codebase, or in a cybersecurity vulnerability - could cost us years of progress, if not more. The only difference now? There’s an added layer of interdependence, enabled by technology, that makes complex systems only as strong as their weakest link.
Credit, liquidity, and market risks have been part of the risk management frameworks of financial institutions (FIs) for nearly 20 years now. Newer operational risks - spilling out of the digital era, growing data availability, and the pandemic - are what’s keeping regulators busy today.
The pandemic has emphasized our reliance on technology. Black boxes of algorithms trawl through reams of data, dictating the information and ads we see through the day while also determining how much work gig workers get - and how much they’re paid. And this machinery is arguably no better understood than the mysterious credit products that brought the banking system to its knees in 2008.
The world’s most advanced computers now hold information that can shed light on global money movement, economic trends, quality of loan underwriting, customer onboarding decisions, non-compliance with regulation, the state of financial inclusion and much more.
Having the ability to collect, process and learn from large quantities of data gives us clues as to how to mitigate the risks of new technologies themselves.
Financial regulators around the world are keenly exploring AI and its sub-branches of Machine Learning (ML), Natural Language Processing (NLP), and neural networks; they’re looking to adopt ‘regulatory technology’ (or ‘regtech’) to improve compliance.
I’ve put together some specific use cases for incorporating AI into the regulatory process -
Anti-Money Laundering (AML) - AML compliance is expensive. The AML data that law enforcement agencies currently receive is a massive trove of billions of data points, scattered and stored in ways that make it nearly impossible to establish a criminal pattern. Add to this the Know-Your-Customer (KYC) data on customers that must also be submitted to regulators to prevent laundering crimes. This has given birth to new regtech firms out to help financial institutions comply with KYC rules more efficiently.
The UK’s Financial Conduct Authority (FCA) is possibly the most forward-thinking regulator in the world right now when it comes to encouraging innovation. In 2018 and 2019, the FCA held two international tech sprints aimed at addressing AML challenges, mostly focused on “Privacy-Enhancing Technologies,” or PETs, of various kinds.
One example is homomorphic encryption, a technique that keeps data encrypted throughout the analytical process so that privacy is preserved. Another, called zero-knowledge proof, enables parties to ask each other yes-or-no questions without sharing the underlying details that spurred the inquiry in the first place.
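To make the homomorphic idea concrete, here’s a toy sketch - not anything the FCA actually deploys - using the well-known fact that unpadded “textbook” RSA is multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product, so a party can compute on values it never sees in the clear.

```python
# Toy demo of a homomorphic property: unpadded "textbook" RSA satisfies
# E(a) * E(b) mod n == E(a * b). Illustrative only - real PETs use
# far stronger schemes (and real RSA is never used unpadded).
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 12
# Multiply the ciphertexts - neither plaintext is revealed...
combined = (encrypt(a) * encrypt(b)) % n
# ...yet the result decrypts to the product of the plaintexts.
print(decrypt(combined))  # 84
```

The point isn’t the arithmetic; it’s that an analyst (or regulator) could run computations like this over encrypted submissions without ever holding the raw data.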
Technology like this, used along with ML, could detect patterns in laundering crimes without compromising privacy or potentially undermining the secrecy of an ongoing investigation.
Fraud prevention - Synthetic identities created by bad actors are one of the biggest fraud threats facing FIs. Several studies have shown that effective use of ML in credit decisioning can more easily detect when, for instance, loan applications are submitted by fake entities. Conventional detection systems might miss these bad actors, but regtech analysis using more data and ML can often catch them. AI-powered fintech solutions are shining examples of that.
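As a hedged sketch (the fields, names, and threshold below are invented for illustration, not any FI’s actual model), one signal such systems exploit is “velocity”: synthetic identities tend to reuse the same phone or address across many applications, and a simple cross-application feature surfaces that for an ML model to consume.

```python
from collections import Counter

# Hypothetical loan applications: (applicant_id, phone, address)
applications = [
    ("a1", "555-0101", "12 Elm St"),
    ("a2", "555-0102", "98 Oak Ave"),
    ("a3", "555-0101", "12 Elm St"),   # same phone + address as a1
    ("a4", "555-0101", "34 Pine Rd"),  # same phone as a1 and a3
]

phone_counts = Counter(phone for _, phone, _ in applications)
addr_counts = Counter(addr for _, _, addr in applications)

def velocity_features(app):
    """Count how many applications share this one's phone/address -
    heavy reuse is a classic synthetic-identity signal."""
    _, phone, addr = app
    return {"phone_reuse": phone_counts[phone], "addr_reuse": addr_counts[addr]}

# Flag applicants whose phone appears in 3+ applications
flagged = [app[0] for app in applications
           if velocity_features(app)["phone_reuse"] >= 3]
print(flagged)  # ['a1', 'a3', 'a4']
```

A production system would feed features like these into a trained classifier rather than a hard-coded threshold, but the underlying idea - more shared data means more detectable reuse - is the same.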
Consumer protection - Technology can unquestionably help bridge the wealth gap. Financial inclusion means bringing people without formal identities into the formal banking system. In some parts of the world, the regulatory pressure on banks to manage the risk of taking on new customers has resulted in whole sectors - and, in some countries, entire populations - being cut off from banking services.
Banks and regulators struggle to distinguish high-risk individuals from low-risk ones. At FinBox, our ML models have been rigorously trained on over a billion data points to create more than 5,000 behavioral and financial parameters, making it the largest and most sophisticated underwriting engine for the new-to-credit (NTC) customer base in India.
The RegTech for Regulators Accelerator (R2A) is a good example of this. The project is currently focused on the Philippines and Mexico, where the aim is to design regulatory infrastructure: citizens can reach the regulator through their cell phones via chatbots to lodge complaints.
Predatory lending & credit discrimination - I’ve already mentioned how lending can benefit from blended underwriting that combines new-age data with formal indicators through AI models. In fact, AI is also increasingly being used as a regtech tool to check whether the underwriting process complies with fair-lending standards. But a lesser-discussed use case for the technology is in regulation itself. More often than not, structural bias leads to “disparate impact” outcomes - when “policies, practices, rules or other systems that appear to be neutral result in a disproportionate impact on a protected group.”
Regulators need to identify lending policies that discriminate on the basis of race, gender, or other prohibited factors - not necessarily through intent, but because a protected class endured a negative outcome. Adversarial AIs could solve this bias problem: regulators would use one AI optimized for an underlying problem - like fraud, credit risk, or money laundering - and another AI optimized to detect bias in the decisions of the first one.
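A bias-detecting AI has to start from some statistical definition of disparate impact. One simple, widely used check is the “four-fifths rule”: if one group’s approval rate falls below 80% of another’s, the policy warrants scrutiny. A minimal sketch (the groups and outcomes below are hypothetical):

```python
def approval_rate(decisions):
    """Share of applications approved (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def four_fifths_check(group_a, group_b, threshold=0.8):
    """Flag disparate impact when the lower approval rate is below
    `threshold` times the higher one (the classic four-fifths rule)."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    ratio = min(ra, rb) / max(ra, rb)
    return ratio < threshold, ratio

# Hypothetical lending outcomes for two demographic groups
group_a = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]  # 80% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # 40% approved

flagged, ratio = four_fifths_check(group_a, group_b)
print(flagged, ratio)  # True 0.5 - the gap warrants scrutiny
```

The adversarial setup goes further - the second AI would search for *which* neutral-looking features drive the disparity - but a threshold test like this is the kind of objective it would optimize against.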
The potential benefits of AI are enormous, but so are the risks.
I’ll leave you with a few things to ponder over -
Russia’s invasion of Ukraine has sparked a whole slew of sanctions against Russian oligarchs who hide their riches in shell companies. FIs are required to screen money movement to identify transactions by sanctioned entities. What if law enforcement agencies like the Financial Crimes Enforcement Network (FinCEN) had AI-powered analytics to find patterns of wrongdoing by sanctioned parties?
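As a hedged sketch (the names and the similarity threshold are invented for illustration), even today’s rule-based screening pipelines do something like fuzzy-matching counterparty names against a sanctions list - AI-powered analytics would layer pattern detection on top of exactly this kind of signal:

```python
from difflib import SequenceMatcher

# Hypothetical sanctions list - real lists (OFAC SDN, etc.) run to thousands
SANCTIONED = ["Ivan Petrov Holdings", "Volga Trade LLC"]

def screen(counterparty, threshold=0.85):
    """Return sanctioned names whose string similarity to the
    counterparty exceeds the threshold, catching slight misspellings
    that an exact-match filter would miss."""
    hits = []
    for name in SANCTIONED:
        score = SequenceMatcher(None, counterparty.lower(), name.lower()).ratio()
        if score >= threshold:
            hits.append(name)
    return hits

print(screen("Ivan Petrov Holdngs"))  # misspelled, still matched
print(screen("Acme Widgets Inc"))     # no match - passes through
```

Shell-company layering defeats simple name matching, of course - which is precisely where ML over transaction graphs, rather than strings, would earn its keep.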
Millions of refugees attract human traffickers. Banks are required by law to maintain AML systems to report the movement of money that may indicate human trafficking and other crimes. Existing systems are ineffective. What if AI-powered compliance systems could flag these criminal rings better? Perhaps the human trafficking trade might not be flourishing.
And finally, what if regulators had been able to track the full extent of the relationship between subprime lenders and Wall Street firms like Lehman Brothers? Armed with real-time data and AI analytics, they could have monitored risk continuously and from various vantage points. It’s hard to speculate on what’s already transpired, but perhaps the enemy isn’t technology but its potential to hide a fine needle in a haystack the size of a planet.
I will see you next week.
Cheers,
Rajat
CEO and co-founder
FinBox