Artificial intelligence holds vast potential for financial institutions. From risk management and compliance to underwriting and other areas, AI offers both risks and opportunities, especially for community financial institutions.
Earlier this year the federal banking agencies (CFPB, FDIC, Federal Reserve, NCUA, and OCC) set out to learn more about the subject, putting out a request for information (RFI) on AI, including machine learning.
Read also: What is a Third Party?
The agencies wanted to understand how financial institutions are using AI and the challenges they face in adopting and managing it.
As a company focused on risk management, Ncontracts was excited to contribute to the conversation. We’ve seen firsthand how AI has begun to influence the industry, our customers, and risk management practices, and we know how important it is to look ahead so the financial system is prepared, protected, and positioned to benefit from advances in technology.
Our comment letter is available to read on the agencies’ websites.
We offered in-depth answers on the challenges of adopting AI in areas like risk management, vendor management, compliance, and fair lending, while highlighting future AI applications that could make FIs more efficient and effective.
But just in case you don’t have time for a seven-page treatise, here are a few highlights, including key challenges facing FIs that implement AI developed or provided by third parties.
When using AI developed or provided by third parties, the biggest challenges facing FIs are data and cost.
Data. AI vendors often provide FIs with pre-built models by pooling an institution’s data with data from other FIs. This is often necessary because a single community FI may not have enough data on its own to build a robust model—but it also means the FI is relying on data from other financial institutions. If that data has problems or reflects unknown biases, those issues will influence the model and can result in discriminatory lending practices.
Even when a third-party vendor uses the FI’s own data to produce a model, the FI typically doesn’t own the model. Vendors either do not want to explain their proprietary algorithms or how the data is crunched, or they cannot explain it due to the inherent nature of deep learning algorithms. (Small FIs are particularly at a disadvantage when seeking insights because they have little leverage.) This makes it hard for FIs to understand the model or demonstrate how it works to regulators. It also creates a business continuity issue. If the AI vendor is unable to perform, the FI will not have access to the model it has been using.
Consider the Bank Secrecy Act (BSA) requirement that an FI risk rate customers based on suspicious activity and currency transactions. Without access to a vendor’s model, an FI will have a hard time ensuring the model’s correctness and won’t be able to easily defend itself when an examiner challenges the model or the appropriateness of the FI’s due diligence. Further, while most FIs spend tremendous resources on anti-money laundering (AML) because of the huge fines associated with this space, most community-based institutions may not be able or willing to spend on AI in areas that are not considered profit centers, such as compliance or risk management. Costs are high and there are only so many resources to go around.
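To make the explainability problem concrete, here’s a minimal, purely hypothetical sketch (in Python) of the kind of transparent, rule-based customer risk rating an FI can fully document and defend to examiners. The field names and thresholds are illustrative assumptions, not regulatory guidance; the point is that a vendor’s proprietary model replaces this visible logic with weights the FI may never see.

```python
# Hypothetical, simplified BSA/AML customer risk rating.
# Every rule is visible and explainable to an examiner -- unlike a
# vendor's opaque model. Thresholds and fields are illustrative only.

from dataclasses import dataclass

@dataclass
class CustomerActivity:
    monthly_cash_deposits: float   # total currency transactions per month
    sar_count_12m: int             # suspicious activity reports in the last year
    high_risk_geography: bool      # customer operates in a high-risk jurisdiction

def risk_rating(activity: CustomerActivity) -> str:
    """Return a documented, rule-based rating: Low, Moderate, or High."""
    score = 0
    if activity.monthly_cash_deposits > 10_000:  # CTR-level cash activity
        score += 2
    if activity.sar_count_12m > 0:               # any SAR history
        score += 3
    if activity.high_risk_geography:
        score += 1

    if score >= 4:
        return "High"
    if score >= 2:
        return "Moderate"
    return "Low"

# Example: a cash-intensive customer with one SAR filed in the past year
print(risk_rating(CustomerActivity(15_000, 1, False)))  # -> "High"
```

An FI that owns logic like this can walk an examiner through every threshold; an FI renting a black-box model often cannot.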
Related: 5 factors to consider when evaluating AI/machine learning
Third-party due diligence. Vetting AI vendors is also a challenge. Even if a vendor is willing to show how its model works, many FIs don’t have staff with the expertise to understand how the models were created or to identify potential biases. Experienced data scientists are expensive and in short supply, making them out of reach for most FIs.
Data security. FIs also face data security risk when using a third-party AI provider. The computing power needed to crunch such huge amounts of data is substantial, and most community FIs need to outsource this function. That means an FI must allow its data to leave its network.
For example, if an FI shares its complaint management data, that may involve allowing a vendor’s AI to read every email that comes into the FI since a complaint can come from anywhere. This requires FIs to open data to third-party vendors in new ways, potentially exposing even more sensitive data to increased risk. As a result, every AI vendor becomes a critical vendor. This risk is deepened if a third-party vendor outsources activities and shares the FI’s data with a fourth-party vendor. FIs need to know their data exposure footprint and ensure protection in every location.
Data sharing. There are also questions about how to share the data. Will legacy systems be able to share data, or will FIs need to build specific API integrations—an expensive investment? FIs will also have to decide whether they are comfortable letting AI partners integrate with their systems and cores (a significant data governance issue) or whether they prefer to push data so they can control which data leaves.
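To illustrate the “push” option, here’s a minimal sketch (in Python, with hypothetical field names and a placeholder vendor endpoint) of an FI filtering a record down to an approved set of fields before anything leaves its network. Direct core integration, by contrast, gives the vendor access to whatever its connection can reach.

```python
# Hypothetical sketch of a "push" data-sharing model: the FI decides which
# fields leave its network before anything is sent to the AI vendor.
# Field names and the vendor URL are placeholders, not a real integration.

import json
from urllib import request

APPROVED_FIELDS = {"account_type", "transaction_amount", "transaction_date"}

def redact(record: dict) -> dict:
    """Keep only the fields the FI has approved for sharing."""
    return {k: v for k, v in record.items() if k in APPROVED_FIELDS}

def push_to_vendor(record: dict, url: str = "https://ai-vendor.example/ingest") -> None:
    """Send the redacted record to the vendor's (placeholder) ingest endpoint."""
    payload = json.dumps(redact(record)).encode("utf-8")
    req = request.Request(url, data=payload,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # in practice: authentication, TLS validation, audit logging

# Example: name and SSN never leave the network, even though they sit in the source record
raw = {"name": "Jane Doe", "ssn": "000-00-0000", "account_type": "checking",
       "transaction_amount": 9500.0, "transaction_date": "2021-06-01"}
print(redact(raw))  # {'account_type': 'checking', 'transaction_amount': 9500.0, ...}
```

The trade-off is that pushing data keeps control with the FI but requires building and maintaining the export, while letting a vendor integrate directly shifts that work to the vendor at the cost of a much larger data exposure footprint.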
By raising these issues now, we hope they can be taken into consideration as AI continues to develop. We’ve all seen how poor vendor oversight can lead to data breaches and compliance violations. The more that financial institutions, regulatory agencies, and vendor partners think about these issues, the more strategic they can be in addressing them.
AI is a huge opportunity, but it’s not without risk. Identifying these risks and potential mitigation strategies early can help inform how AI products and services are built and generate the most value from a promising technology.
Related: Artificial Intelligence (AI) and Risk Management Controls: How to Protect Your Financial Institution