
How to Manage Third-Party AI Risk: 10 Tips for Financial Institutions

7 min read
Jan 23, 2025

The financial services industry is at an artificial intelligence (AI) inflection point. Advances in AI technology, and the third-party services (vendors) that offer it, present an array of exciting products, solutions, and other opportunities to save time and money moving forward. 

However, these new opportunities come with the potential for risks, including data privacy and compliance issues. 

Let’s explore some of the most common third-party AI-related risks in financial services and how your financial institution can better navigate the fast-evolving AI landscape.  

Related: Webinar: Managing Third-Party AI Risk: What You Need to Know Today 

AI risks explained 

Before financial institutions (FIs) can address AI risk, they need to understand common AI vendor risks.  

  • Data risk. Anytime your FI shares data with vendors, the data is at risk of being shared with unauthorized parties or exploited. Additionally, the data quality within AI models varies.  
  • Compliance risk. It’s not unusual for a vendor, especially new and inexperienced vendors, to fail to follow recommended interagency guidance or regulatory and institutional policies regarding AI, which puts your organization at risk of noncompliance. Remember, your vendor’s compliance risk is your organization’s risk. 
  • Operational risk. Operational risks are vulnerabilities due to failures in processes, systems (including those provided by vendors), and the people using those processes and systems. When a vendor faces an AI-related data breach or system outage, the FIs using that technology will also be affected.  
  • Black box-related risk. Many AI models exist in a black box, meaning developers don’t always understand why systems come up with their decisions or answers. For example, if a lender relies on machine-generated, black box algorithms to approve or deny credit applications, they may unintentionally violate fair lending laws.  

Related: TPRM 101: Top Third-Party Vendor Risks for Financial Institutions 

Tips for managing third-party AI risk

While third-party AI risk shouldn’t be underestimated, there are some action steps FIs of all sizes can take to ensure they properly mitigate and manage risk: 

1. Ensure AI systems are transparent  

Whether AI systems and solutions are used internally or via third-party relationships, it’s crucial to understand how AI is used across your organization. Regularly assess AI decision-making processes, such as automated credit scoring and loan underwriting systems and AML/CFT transaction detection services, to ensure decisions are not based on biased or incorrect data—which leads us to our next actionable insight. 

Related: Credit Risk - You’ll Take the Blame If Your Vendor Doesn’t Have the Credit 

2. Beware of bias  

One of the main critiques of AI use in financial services and other industries is bias. AI systems, especially generative AI (genAI) systems, can produce false, biased, or otherwise unacceptable results at any point. While these outputs can lead to operational and compliance risks, they can also cause reputational risk. 

AI washing, which occurs when companies or organizations exaggerate or misrepresent their use of AI in marketing or communications, can also result from a vendor-related incident. For example, suppose a vendor distributes your FI’s marketing materials: issues can arise if the system’s data points or parameters are incorrect or the distribution methods don’t follow compliance guidelines. 

Ensure your FI has a monitoring process to detect bias and other issues with AI systems. Implement mechanisms to detect unacceptable results early so your FI can take corrective action immediately.  
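As a simple illustration of what such a bias-detection mechanism might check, the sketch below computes the disparate impact ratio behind the well-known "four-fifths rule." The group labels, counts, and the exact alerting logic are hypothetical, not taken from any regulation's text or any specific vendor's system:

```python
# Sketch: flag potential bias in AI-driven approval decisions using the
# "four-fifths rule" disparate impact ratio. Group labels are hypothetical.

def disparate_impact_ratio(approvals: dict[str, tuple[int, int]]) -> float:
    """approvals maps group -> (approved_count, total_applications).
    Returns the lowest group approval rate divided by the highest."""
    rates = {g: approved / total for g, (approved, total) in approvals.items()}
    return min(rates.values()) / max(rates.values())

def flag_for_review(approvals: dict[str, tuple[int, int]],
                    threshold: float = 0.8) -> bool:
    """True if the ratio falls below the four-fifths (0.8) threshold."""
    return disparate_impact_ratio(approvals) < threshold

# Example: group B is approved at half the rate of group A, so the
# decisions are flagged for human review.
sample = {"group_a": (80, 100), "group_b": (40, 100)}
print(disparate_impact_ratio(sample))  # 0.5
print(flag_for_review(sample))         # True
```

A check like this is cheap to run on every batch of decisions, which is what makes early corrective action possible.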

Related: Regulating the Future: What Financial Institutions Need to Know About AI and Regulatory Risks 

3. Know your risk appetite  

Not all FIs will take—or should take—the same approach to vendor relationships and AI. Knowing your FI's risk appetite is part of safely using AI, as well as other services and technology.  

An institution's risk appetite, determined by its board, should be documented in a statement that guides strategic decision-making and resource allocation. This statement typically outlines the institution's tolerance for risk in key areas like lending, technology, and operational risk. 

Institutions should set a risk tolerance that accounts for both the benefits and the risks of AI use in specific areas. For example, AI used in credit scoring or decision-making should be within the institution's risk tolerance, especially if errors could lead to consumer harm.

Related: ERM 101: What’s Your FI’s Risk Appetite? 

4. Establish control monitoring 

A risk management control—a measure, process, or mechanism put in place to mitigate risk—reduces the likelihood of a risk event occurring or minimizes the impact if it does occur.  

FIs should implement AI controls, monitor their deployment, and adjust them as needed. For example, an FI could set up a committee to oversee ethical AI use or require a human review before any AI-suggested action. 
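A "human review before any AI-suggested action" control can be enforced in software as a simple gate. The sketch below is a hypothetical illustration of that pattern; the `Action` class and `execute` function are invented for this example, not a reference to any particular product:

```python
# Sketch: a minimal human-in-the-loop gate for AI-suggested actions.
# The Action class and execute() function are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    suggested_by_ai: bool
    human_approved: bool = False

def execute(action: Action) -> str:
    """AI-suggested actions run only after explicit human approval;
    manually initiated actions pass through unchanged."""
    if action.suggested_by_ai and not action.human_approved:
        return f"BLOCKED (awaiting review): {action.description}"
    return f"EXECUTED: {action.description}"

hold = Action("Release hold on account 1234", suggested_by_ai=True)
print(execute(hold))        # blocked until a reviewer signs off
hold.human_approved = True
print(execute(hold))        # now executed
```

The value of expressing the control this way is that the gate cannot be skipped by an individual user: approval is a precondition of execution, not a step in a checklist.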

Related: Risk Management Controls in Banking 

5. Foster ongoing training and awareness 

AI is constantly evolving. Over time, we’ll continue to see AI technology advance, but with every advancement comes new barriers. That’s why teams involved in AI system deployment must continually train in the latest best practices and ethical considerations as part of the organization’s broader compliance training program. 

FIs should regularly assess AI usage internally and with vendors to ensure an understanding of data usage.

Related: 5 Mistakes That Will Sink Your Compliance Training Program 

6. Refer to the experts

If your FI recently integrated AI or plans to use it soon, the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework is an excellent resource. The core components for AI risk management outlined in the framework include:  

  • Governance. FIs should define clear oversight roles for managing AI systems. 
  • Risk identification. FIs should regularly assess risks like bias, data privacy, and performance drift (i.e., a gradual shift away from established thresholds). 
  • Ongoing monitoring. FIs should continually monitor AI systems to stay aligned with business objectives. 
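Drift can be quantified rather than eyeballed. One common metric, shown here purely as an illustration (the NIST framework does not mandate it), is the population stability index (PSI), which compares a model input's current distribution against a baseline; the bucket counts and the 0.2 review threshold below are widely used conventions, not regulatory requirements:

```python
# Sketch: population stability index (PSI) to detect input-data drift.
# Bucket counts and the ~0.2 review threshold are illustrative conventions.

import math

def psi(expected_counts: list[int], actual_counts: list[int]) -> float:
    """PSI across matching buckets of a baseline vs. current distribution."""
    e_total, a_total = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)   # avoid log(0) on empty buckets
        a_pct = max(a / a_total, 1e-6)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

baseline = [30, 40, 30]   # e.g., credit-score buckets at model validation
current = [10, 30, 60]    # distribution observed in production
print(round(psi(baseline, current), 3))  # well above 0.2, so flag for review
```

An identical distribution yields a PSI of zero; the larger the value, the further production data has drifted from what the model was validated on.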

Ncontracts also offers various resources, including vendor management and contract management, for FIs as they begin their AI vendor journeys. 

Related: The Vendor Management Solution Buyer’s Guide  

7. Take advantage of model risk management for AI

A risk assessment is the first step in the risk assessment lifecycle, a multi-step process that includes audits, findings, and actions based on those findings. Before performing a risk assessment, especially for a new risk area such as vendor AI risk, it is helpful to refer to a model risk assessment provided through a knowledge-as-a-service (KaaS) provider like Nrisk. 

As you examine your FI’s AI risk management framework, consider the key elements of model risk management, which can also be applied to AI: 

  • Validation: Clear documentation, quality data, and regular testing of models to ensure they are accurate  
  • Governance: Clear oversight of models to manage risk and ensure compliance  
  • Monitoring: Ongoing monitoring to detect any drift or performance issues in models 

Related: Whitepaper: Creating Reliable Risk Assessments 

8. Vet vendors properly 

Too often, FIs don’t perform due diligence on their vendors, and by the time they notice red flags, the relationship is on a downward spiral. Or worse, the FI’s operations, compliance posture, reputation, finances, and consumer relationships are ruined because of the partnership. 

Before using a vendor’s services or solutions, you should consider its finances and compliance posture, evaluate its security systems, and study how the vendor handles nonpublic information to ensure data security.  

Below are some questions to ask potential vendors about their use of AI: 

  • How are you ensuring your AI systems and solutions are transparent, fair, and accountable? 
  • How can you ensure my FI’s data stays protected from breaches or unauthorized access? What security measures do you have in place? 
  • How are you ensuring compliance with regulatory and institutional policies now and in the future?  
  • What is the emergency protocol in case of a data breach or other unexpected event? How soon will our FI be made aware once the event is detected?  

Another valuable tool is the Financial Services Information Sharing and Analysis Center’s (FS-ISAC) Generative AI Vendor Risk Assessment Guide. The guide aims to simplify the vendor due diligence process and includes a structured assessment model and a vendor questionnaire that facilitates reporting and record-keeping. 
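To illustrate how a structured assessment model can turn questionnaire answers into a record-keeping-friendly result, the sketch below scores answers into a simple risk tier. The questions, weights, and tier cutoffs are invented for illustration; they are not taken from the FS-ISAC guide or any other published methodology:

```python
# Sketch: score vendor AI questionnaire answers (0 = no evidence,
# 1 = partial evidence, 2 = strong evidence) into a risk tier.
# Questions, weights, and cutoffs are hypothetical.

QUESTIONS = {
    "transparency_and_accountability": 3,   # question -> weight
    "data_protection_controls": 3,
    "regulatory_compliance_program": 2,
    "incident_response_protocol": 2,
}

def risk_tier(answers: dict[str, int]) -> str:
    """Higher score means stronger evidence, i.e., lower residual risk."""
    max_score = sum(2 * w for w in QUESTIONS.values())
    score = sum(answers.get(q, 0) * w for q, w in QUESTIONS.items())
    pct = score / max_score
    if pct >= 0.75:
        return "low"
    if pct >= 0.5:
        return "moderate"
    return "high"

print(risk_tier({q: 2 for q in QUESTIONS}))   # "low"
print(risk_tier({q: 0 for q in QUESTIONS}))   # "high"
```

Recording the individual answers alongside the computed tier gives examiners a clear audit trail of how each vendor's rating was reached.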

Related: Webinar: Mastering Vendor Tiering  

9. Draft and update vendor contracts as needed

If your FI notes red flags before or after a vendor relationship has begun, there are ways to address the issues.  

When drafting contracts and service level agreements (SLAs), specify requirements mandating transparency and accountability around AI use. If a contract already exists, consider addendums to address new AI-related risks or draft new agreements as needed. 

If the vendor hesitates to share the information, frame the needed transparency as a mutual benefit to enhance trust and collaboration. Remind the vendor that regulators are carefully evaluating FIs for vendor-related violations and that being proactive could save both parties headaches (and even penalties) in the long run.  

Related: How to Develop an SLA for Third-Party Providers 

10. Use third-party AI solutions effectively

While we’ve discussed the risks associated with third-party AI relationships in depth, it’s also important to remember why your FI is using AI in the first place. AI can significantly enhance your FI’s strategic capabilities, saving you valuable time and resources. FIs adopting AI are seeing the benefits of increased productivity. The McKinsey Global Institute estimates that genAI could add between $200 billion and $340 billion in value annually, or 2.8 to 4.7 percent of total industry revenues, across the global banking sector. 

Third-party AI tools are also helping FIs uncover new opportunities. For example, predictive analytics, which uses data to forecast future events, can identify new market opportunities and customer needs. By analyzing large data sets, AI can identify trends and patterns that may not be immediately visible, allowing institutions to tailor their offerings and approach. 

Related: Q&A: The Future of Artificial Intelligence and Contract Management 

Make the most of your third-party AI relationships

Managing AI and vendor risks can be a cumbersome task, but the right vendor management solution can help your FI stay informed about your vendors and their activities so you can mitigate risk and act with confidence. 

Want to learn more about vetting and onboarding technology vendors?  
Download The Ultimate Guide to Fintech and Third-Party Vendor Onboarding. 
