
Regulating the Future: What Financial Institutions Need to Know About AI and Regulatory Risks

6 min read
Aug 6, 2024

The regulators have spoken: financial institutions need to get serious about managing AI risk. From cybersecurity and fraud to consumer compliance and third-party risk, AI poses significant regulatory risks. 

These regulatory risks are addressed in a recent report from the U.S. Treasury, enforcement actions against FIs whose vendors use AI, CFPB and interagency statements about the dangers of AI algorithms in lending, a recent bill from the Colorado state legislature, and more. 

Should financial institutions include AI risks in their risk management frameworks? According to regulators, the answer is a resounding yes. 

Related: The Risks of AI in Banking  


U.S. Treasury Report Outlines Steps for Addressing AI Risk in the Financial Services Industry 

Are you wondering exactly how regulators will address AI risk? Then the U.S. Treasury’s recent report on managing AI risk is a must-read. 

The report notes significant cybersecurity and fraud challenges associated with AI in the financial services sector. Many FIs already have first-hand experience with these challenges. A survey from the firm BioCatch found that 51% of financial institutions lost between $5 million and $25 million from AI fraud and cybersecurity threats in 2023. 

Related: AI Is Already Costing Financial Institutions Millions: Here’s How to Manage the Risks 

Large gaps exist in FIs’ ability to handle AI risk, and the U.S. Treasury offers a roadmap for regulating artificial intelligence in banking. 

Let’s examine some of the steps suggested in the Treasury’s report (alongside other statements and enforcement actions from the agencies) for a fuller picture of the current regulatory landscape of AI in banking. 

Expanding the NIST Framework to Include AI and Third-Party Risk  

The Treasury’s report recommends that the National Institute of Standards and Technology (NIST) Cybersecurity Framework include governance and risk management standards specific to AI risk. 

Long viewed as the yardstick for organizations to evaluate their cybersecurity posture, the NIST framework underwent its first major revision in over a decade earlier this year. One of the key changes in NIST 2.0 was its focus on third-party risk governance. 

This emphasis on third-party risk resonates with regulatory concerns about financial institutions partnering with vendors that use artificial intelligence. While it didn’t address AI specifically, an FDIC consent order against a New Jersey bank last year for fair lending violations was viewed by many as a warning to FIs offering third-party AI banking products. 

Under the order, the bank must seek regulatory approval before onboarding new vendors, putting immense pressure on its profitability and raising compliance costs. Other financial institutions should view this as a cautionary tale. Regulators have grown increasingly wary about partner banking, especially when it involves the use of artificial intelligence. 

Going forward, financial institutions must ask vendors about their use of AI technologies so they can assess any risks in these relationships. Providers that use AI should be treated like any other critical vendor, and FIs should follow the recent Interagency Guidance on Third-Party Relationships: Risk Management when monitoring AI providers' high-risk activities throughout the lifecycle of the relationship. 
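
To make that due diligence concrete, here is a minimal Python sketch of the kind of AI-risk record an FI might keep on each vendor. Everything here is a hypothetical illustration: the field names, the triage thresholds, and the vendor are assumptions, not a regulatory schema or anyone’s actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class VendorAIProfile:
    """Hypothetical vendor record for AI due diligence (illustrative only)."""
    name: str
    uses_ai: bool
    ai_use_cases: list[str] = field(default_factory=list)  # e.g., "underwriting", "chatbot"
    touches_consumer_data: bool = False
    model_explainable: bool = False  # can the vendor explain individual decisions?

def risk_tier(vendor: VendorAIProfile) -> str:
    """Rough triage: AI that drives consumer-facing outcomes on consumer
    data without explainability warrants the most scrutiny."""
    if not vendor.uses_ai:
        return "standard vendor review"
    if vendor.touches_consumer_data and not vendor.model_explainable:
        return "high risk: enhanced ongoing monitoring"
    return "elevated risk: periodic review"

# Hypothetical vendor
vendor = VendorAIProfile(
    name="ExampleLendingTech",
    uses_ai=True,
    ai_use_cases=["credit underwriting"],
    touches_consumer_data=True,
)
print(vendor.name, "->", risk_tier(vendor))
# ExampleLendingTech -> high risk: enhanced ongoing monitoring
```

The triage logic is deliberately crude; the point is simply that the answers to these AI questions should feed directly into how intensively a vendor is monitored over the life of the relationship.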

Peering Inside the Black Box

The U.S. Treasury Department is concerned with the decisions of black-box AI models, but it’s not the only agency invested in protecting consumers from verdicts made by artificial intelligence. 

For years, the Consumer Financial Protection Bureau (CFPB) has warned financial institutions that machine-generated, black-box algorithms can’t be the sole basis for approving or denying credit to applicants. Relying on algorithms for credit decisions may violate fair lending laws, and it doesn’t excuse lenders from providing Adverse Action Notices (AANs) that explain the specific reasons for credit denials. The stakes of credit decisions, such as mortgage applications, are too high to be entrusted to a computer program, according to the CFPB. 
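
To see why explainability matters here, consider an additive scorecard, one traditional way lenders keep credit decisions explainable. This is an illustration, not the CFPB’s method or any lender’s actual model, and the weights and features are invented. Because each factor’s contribution to the score is known, the lender can report the specific factors that drove a denial, in the spirit of an AAN:

```python
# Invented weights and features for illustration only.
WEIGHTS = {
    "payment_history": 0.8,      # stronger history raises the score
    "credit_utilization": -0.6,  # higher utilization lowers it
    "months_of_history": 0.3,
    "recent_inquiries": -0.4,
}
APPROVAL_CUTOFF = 0.5

def score_and_reasons(applicant: dict) -> tuple[bool, list[str]]:
    # Each factor's contribution to the score is known and auditable.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= APPROVAL_CUTOFF
    # On denial, report the factors that hurt the score most.
    reasons = [] if approved else sorted(contributions, key=contributions.get)[:2]
    return approved, reasons

applicant = {"payment_history": 0.4, "credit_utilization": 0.9,
             "months_of_history": 0.5, "recent_inquiries": 1.0}
print(score_and_reasons(applicant))
# (False, ['credit_utilization', 'recent_inquiries'])
```

A black-box model offers no analogous decomposition, which is precisely the regulators’ complaint.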

Colorado’s recent AI law (the first in the nation) also aims to address the discriminatory impact of high-risk AI systems. Taking effect in February 2026, the law requires anyone who develops or deploys high-risk AI systems that make consequential decisions to complete an impact assessment and document their AI risk management policies. It applies to anyone conducting business in the state. 

Why all the fuss about high-risk, black-box AI models? Aren’t computers better equipped to make objective decisions free from human error? After all, artificial intelligence would seem less likely to grant a discriminatory pricing exception. 

It all comes down to the problem of explainability. Many current AI models exist in a black box, meaning even their developers often don’t know exactly why a system arrives at a given decision or answer. 

Generative AI is prone to hallucinations, sometimes providing wildly incorrect answers to simple factual questions. Training AI on bad or incorrect data is one cause, and black-box logic compounds the problem. As the saying goes, “garbage in, garbage out.” If you don’t know what went in, you can’t be confident in what comes out.  

The CFPB highlights the problem of financial institutions deploying AI chatbots to respond to consumer inquiries. The agency has received numerous complaints from banking consumers trying to receive straightforward answers to questions or resolve disputes, only to be met with AI bots that fail to respond appropriately. 

FIs using AI face the operational risk of system failures, noncompliance with consumer protection laws when bots spit out inaccurate answers, and data privacy violations if they use an open system that may leak sensitive consumer data. 
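
On the data privacy risk specifically, one common mitigation is scrubbing obvious PII from consumer messages before they ever reach an external model. Here is a minimal sketch; the regex patterns are illustrative assumptions and nowhere near production-grade:

```python
import re

# Illustrative patterns only; real PII detection is much harder.
PII_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(message: str) -> str:
    """Replace detected PII with labeled placeholders before the
    message is forwarded to any external AI system."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[REDACTED {label}]", message)
    return message

print(redact("My SSN is 123-45-6789, email jane@example.com"))
# My SSN is [REDACTED SSN], email [REDACTED EMAIL]
```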

While financial institutions can feasibly build closed systems to prevent data leakage, the explainability that regulators cite as the remedy for compliance issues and operational risk is not so easily achieved. 

Generative AI wasn’t designed to offer explanations for its decisions. It was designed to generate outputs – however inaccurate these may be. Opening AI’s black box at this stage means staring into the void. 

Regulators Are Getting on the Same Page on AI Risk 

Despite the handful of warnings from regulators regarding AI risk in banking, the agencies have yet to coordinate their efforts to address this threat fully. But they’re starting to make moves in this direction.  

A joint statement on the problem of bias in automated systems (or “so-called artificial intelligence”) issued by the CFPB, Department of Justice (DOJ), Federal Trade Commission (FTC), and Equal Employment Opportunity Commission (EEOC) identifies a wide range of potential harms – from fraud to privacy to fair competition. 

The statement calls for an “all-of-government” approach to enforce existing regulations to manage the threat of AI.  

Separately, the Office of the Comptroller of the Currency’s (OCC) Fall 2023 Semiannual Risk Perspective recognizes AI in banking as an emerging threat. In a speech, Acting Comptroller of the Currency Michael J. Hsu identified cybersecurity and fraud as the primary AI risks regulators and financial institutions must address. 

Finally, the United States Copyright Office began examining the issue of potential copyright infringement from the use of generative AI in early 2023. OpenAI, the creator of ChatGPT, has already faced a legal challenge from The New York Times over how it used the newspaper’s “data” to train its models. Financial institutions should pay attention to these developments to avoid potentially costly legal actions down the road. 

But the U.S. Treasury report recognizes that moving forward with AI regulation in financial services requires international cooperation, with legislators and regulators engaging foreign parties to address AI risk. 

So far, legislators in the U.S. have taken the path of least resistance, demonstrating a preference for voluntary compliance over more stringent legislative measures. This may soon change. As the technology evolves and fraud and cybersecurity become more pressing dangers, Washington will likely be forced to act. 

FIs Can’t Afford to Wait and See How AI Regulation Plays Out 

The statements and reports of the U.S. Treasury, CFPB, and other agencies make it clear that regulators want financial institutions to incorporate AI risk into their broader risk management programs. 

The Treasury’s report finds that “risk management programs should map and measure the distinctive risks presented by technologies such as large language learning models.” Financial institutions are wrong to think this is a risk they can put off. AI has already cost FIs significant money, and even institutions that don’t use the technology directly are exposed through fraud and third-party AI use. 

With regulators indicating their intention to address AI risk seriously, financial institutions must create AI risk policies and processes around information security, compliance, and third-party risk management. 

Where should they begin? 

Model risk management tools on enterprise risk management systems empower financial institutions to incorporate processes and controls for AI risk into their larger risk management frameworks. Additionally, third-party risk management and compliance risk management systems will help FIs better understand their AI risk exposure from vendors and stay ahead of emerging AI regulations. 

Artificial intelligence is a powerful tool, offering ample opportunities to financial organizations. But FIs that plan to incorporate AI into their business models (and even those that don’t) must first assess the risks. 

Want more tips on addressing AI risk? Watch our webinar, “Managing AI Risk: A Primer for Financial Institutions.” 

