Fintech for the Financially Excluded?
Financial technology has the potential to help lift millions out of poverty. But are we adequately assessing its risks?
The potential for financial technology, or fintech, to help the financially excluded populations of the world is well documented. As two billion people are still without bank accounts, savings, loans, and access to payment services, fintech is indeed a welcome innovation. At Bamboo Capital Partners, we believe fintech can help low-income people reduce vulnerabilities, build assets, manage cash flow, and increase income, and we have invested accordingly: Over the past few years, we have made four equity investments in fintech companies in Colombia, Mexico, Chile, and Tanzania, committing more than $16 million. Our investees are helping democratize access to finance through peer-to-peer lending platforms (KuboFinanciero), promoting access to insurance (ComparaOnline), enabling mobile payments and savings for low-income people through nano deposits (Movii), and providing a smart data platform for emerging-market financial institutions (First Access).
Yet fintech doesn’t come without risk. Artificial intelligence (AI) failures, personal data mining, hacking, identity theft, and aggressive digital credit offers affect not only the rich and hyper-connected, but also low-income customers. A rural villager using a basic mobile phone is exposed to fintech risks too. Uncovering the potential negative impacts of impact investing is therefore crucial for these customers.
Low-income customers produce less of a digital trail than do higher-end users, but factors such as low literacy and numeracy, low awareness of data protection rights, little representation, reduced options, and reduced assets and savings compound their vulnerability to abuse.
AI and the loss of high-touch customer service
“High-touch” customer service is service with a high degree of human involvement, as is the case with traditional microfinance transactions. Anyone who accompanies a loan officer on client visits will witness this level of human involvement. Human transactions are based on trust: The borrower trusts that the lender will conduct a fair assessment of their creditworthiness and will respect the terms and conditions of the deal. The lender trusts the borrower to repay the money, both principal and interest. Every day, billions of transactions are sealed with a handshake, a signature, and an eye-to-eye exchange. The human touch is particularly important for low-income customers, for whom faith in an individual is greater than faith in an institution.
So what happens when a customer obtains a loan through a faceless device instead? A digital-only transaction redefines the trust relationship and the commitment on both ends. Moreover, when a digital loan is granted through a mobile provider, there are no longer two but three parties involved, and none of them sees the others. Proponents of algorithm-based lending argue that this eliminates subjectivity in decision-making, replacing it with data-based decisions. But digital transactions with automated onboarding may result in excessive standardization. The repayment-capacity analysis may be lax or replaced entirely by AI-driven algorithms. Which delivers a better, fairer judgment: an algorithm or a loan officer? What we do know is that debt burden and repayment capacity must be adequately scrutinized. If they are not, the result can be over-lending and customer over-indebtedness, or rejection of a loan on opaque grounds, including arbitrary profiling based on factors such as location. As frictionless financial services increasingly target those in the low-income bracket, it is paramount that we not overlook suitability principles such as “sell only what clients can use and need.”
The value of transparency in finance
Another important aspect of client protection in both traditional finance and fintech is transparency. Algorithms are valuable commercial property and are rarely disclosed; traditional lending policies are more transparent. Just as a T-shirt label doesn’t have enough space to list the CO2 emissions of its production, a mobile device offers limited space to disclose terms, the use of personal data, default consequences, and grievance mechanisms. At times customers are not even aware that they have consented to a loan. AI can lead to automatic blacklisting by credit bureaus, for which repair is difficult, costly, and slow. Some reports indicate more than half a million people are blacklisted in Kenya for amounts as small as one US dollar, and unfortunately, they will not obtain any loans until they are cleared, if and when that happens.
In digital lending, when the customer’s loan request is rejected by AI using “alternative data”—which may include geolocation, frequency of SMS use, phone charging, medical records, or, for the more Internet-savvy, browsing history, social media profiles, and online purchasing records—what recourse does the customer have? Who is behind this automation-based decision? Redress in the case of AI errors may prove especially hard to obtain.
There are other risks affecting client protection in fintech. These include abuse and breaches of personal data and security, including privacy or failure to obtain prior consent; technology and network risk; deficient customer identity authentication; misuse of passcodes; and weak regulatory frameworks and poor law enforcement and redress.
Ensuring ethical behavior from behind a screen
Ensuring ethical behavior from a salesperson who never sees their client in person is hard. What happens when the salesperson is under pressure to deliver aggressive targets? Suppose you receive two conflicting instructions: a) place a large amount of money this month, and b) do it carefully, with good judgment of customers’ creditworthiness. Aggressive targets mean there will be a trade-off between the two. Will fintech reduce or exacerbate aggressive sales targets? It may be too early to say. When in conflict, most of us behave ethically only when observed. Case in point: In a recent paper, Bibi Mehtab Rose-Palan cites the example of Wells Fargo, where at least two million deposit and savings accounts were opened in customers’ names without their consent. Rose-Palan concludes that the “morality diminishing” factor that led employees to commit this fraud was the aggressive sales targets set by the same company that called on them to behave ethically. This points to a critical responsibility of investors: Set unrealistically high targets, and you will encourage staff to take behavioral shortcuts. For example, if you don’t monitor staff behavior (did they treat your client well?) but do monitor staff performance (did they reach the sales target?), most employees will focus on the sales goal. If you instead promote ethical leadership across the board, from senior management to the sales force, you will reduce this risk.
Focus on the end user
As investors and providers of fintech, we have a responsibility to identify, acknowledge, and mitigate potential negative impacts on the very population we aim to serve. The microcredit industry has frequently overlooked risks like over-indebtedness. This is not a call to halt innovation. Fintech can be the solution to the last mile of financial inclusion, but responsible fintech is what we want. Recent years have seen a number of fintech consumer-protection initiatives surface; of particular relevance to investors are guidelines for responsible fintech. The ability to question our assumptions, and to check where and how things might go wrong, is a characteristic of responsible players. So is staying focused on the end user: taking account of the specific vulnerabilities of low-income customers, remaining accountable primarily to them, and exercising respect and good judgment. Remember, AI cannot substitute for human empathy and human judgment.