RBI proposes regulation of underwriting algorithms.
But where does the responsibility truly lie?
According to Elon Musk, AI will overtake humans in just five years. Whether or not this prediction comes true, AI has already made its way into all kinds of decision making with major impacts on businesses and individuals alike.
Take, for instance, new-age lending apps that rely on proprietary AI-powered underwriting algorithms to assess new-to-credit borrowers and measure their repayment capability. They access reams of data - often from the borrower's device - to decide how much to lend and on what terms. This was a significant step toward increasing credit penetration in India, allowing banks and NBFCs to open doors to thin-file borrowers, considered riskier for lack of formal documentation and credit histories.
And so, loans disbursed digitally by commercial banks have surged four-fold since FY17. But NPAs continue to rise as well, remaining comparable to those of traditional, non-digital loans.
Can you trust your AI?
Now, the RBI has sat up and taken notice. The banking regulator has proposed to regulate these algorithms, to ensure they’re fair and less risky in nature.
It doesn’t seem like an unfair ask by any means. It’s only complicated by the fact that these algorithms represent confidential company assets developed with proprietary tech that lenders may not want to disclose publicly.
It’s this ‘cloak and dagger’ approach that is the problem. There’s a severe lack of transparency when it comes to AI and its most popular application, Machine Learning (ML). When underwriting a potential borrower, there’s little visibility into when, where, and how a decision was made. So when there’s such secrecy around how the algorithm functions, how is anyone - be it a borrower or even the lender - meant to trust the outcome?
The answer lies not in a regulatory crackdown, but in making these algorithms traceable and auditable. In other words, tech companies now need to invest significant resources in building what is known as 'responsible AI', which ensures explainable decisions based on unbiased data. This has two major aspects:
Interpretability: The algorithm should produce results that can be easily understood by every stakeholder involved, especially the borrower.
Traceability: Each stage involved in the decision-making process must be documented in a way that can be traced back to the very first step.
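To make these two properties concrete, here is a minimal sketch of what an interpretable, traceable decision could look like in code. Everything in it is illustrative: the feature names, weights, and threshold are invented for the example and do not reflect any real underwriting model (FinBox's or otherwise). The point is the shape of the output - per-feature reason codes for interpretability, and a step-by-step audit trail for traceability.

```python
from dataclasses import dataclass

# Hypothetical scorecard weights -- purely illustrative, not a real model.
# Feature values are assumed to be pre-normalized.
WEIGHTS = {
    "monthly_income": 0.4,
    "debt_to_income": -0.35,
    "months_of_history": 0.25,
}
APPROVAL_THRESHOLD = 0.5

@dataclass
class Decision:
    approved: bool
    score: float
    reasons: list       # per-feature contributions, most influential first
    audit_trail: list   # every step recorded, so the outcome can be traced

def underwrite(features: dict) -> Decision:
    """Score a borrower while logging each step of the decision."""
    trail = [("input_received", dict(features))]
    contributions = {}
    for name, weight in WEIGHTS.items():
        contributions[name] = weight * features[name]
        trail.append(("feature_scored", {name: contributions[name]}))
    score = sum(contributions.values())
    approved = score >= APPROVAL_THRESHOLD
    trail.append(("decision", {"score": score, "approved": approved}))
    # Interpretability: rank features by how strongly they moved the score.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return Decision(approved, score, reasons, trail)
```

A real system would replace the linear scorecard with a learned model, but the contract stays the same: every decision ships with the reasons behind it and a record that can be traced back to the very first step.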
Getting AI right is more important than getting the right AI
But perhaps we need to take a step back. As those involved in the business of lending, we must ask ourselves: is AI the solution for the problem we're looking to solve? At which stages of the underwriting process is it truly necessary? Are there any inevitable yet unintended consequences, and are they worth it in the context of the greater good?
Image source: https://www.brookings.edu/research/reducing-bias-in-ai-based-financial-services/
At FinBox, we've tackled these tough questions to build a robust risk assessment engine that uses AI and ML only where absolutely needed, in order to produce the most accurate and unbiased decisioning.
The AI and ML-driven underwriting suite is consistently updated with new data in order to guarantee optimal lending decisions for both the lender and the borrower. To learn more about the process, click here.
So while the RBI may have had the best intentions when proposing regulatory oversight over underwriting algorithms, the truth is, and at the risk of sounding clichéd, it’s a change that can only come from within.
Tech-first companies must make it a part of their mandate to build AI-powered systems that leave little room for ambiguity and do away with age-old biases - after all, wasn’t that the point of moving from humans to machines anyway?