Stay up to date on the latest vendor risk management news happening this month. Check out the articles below.
Recently Added Articles as of April 23
Managing AI risk in financial services calls for a TPRM mindset. AI adoption in banking and insurance is outpacing regulatory frameworks. NIST and ISO offer structured guidance, but the compliance posture needs to be continuous and adaptive rather than a pass/fail checklist. Many AI solutions are external services, meaning institutions often lack full visibility into how they work — yet bear the regulatory and reputational exposure when they fail. Cross-functional governance across legal, compliance, risk, and operations is critical to manage the risks. Treat AI solutions with the same rigor applied to any other vendor before deployment.
Insurance CROs name third-party dependency a persistent top-tier risk. Cybersecurity, strategic risk, and third-party dependency are dominant concerns for insurance CROs, according to a new study. Third-party and vendor cyber risk ranks among the leading cyber concerns, and the surge in third-party relationships has made resilience across ecosystems a board-level issue — with two-thirds of firms prioritizing vulnerability and vendor management.
The NAIC is building a registry for third-party data and model vendors used by insurers. At its Spring 2026 National Meeting, the NAIC's Third-Party Data and Models Working Group advanced a framework that would require third-party data and model vendors used by insurers to register with state insurance departments, with an initial focus on vendors involved in pricing and underwriting. Whether registration would be mandatory or voluntary, and whether a model law should replace the current framework format, remain open questions. Panelists emphasized the importance of a cross-functional approach to AI governance and noted that vendors are becoming more open about their AI models and processes. There are concerns about the gap between large and small organizations in implementing AI governance, as smaller insurers often have fewer resources available. Insurers need to ensure risk management processes address the entire lifecycle of an AI tool.
The OCC and federal banking agencies updated model risk management guidance. The OCC, Federal Reserve, and FDIC jointly issued updated interagency guidance on model risk management, replacing prior guidance from each agency. The updated framework is principles-based and risk-based rather than prescriptive and expects banks to tailor model risk programs to their size, complexity, and risk profile. Worth noting: generative AI and agentic AI are explicitly out of scope — not because regulators consider them low risk, but because they don't yet have a framework for them. Financial institutions are expected to govern those tools themselves, and the guidance makes clear that weak model risk management can still result in findings of unsafe or unsound practices.
Third-party AI tool compromise puts customers at risk. A cloud development platform was compromised after an employee used a consumer AI productivity tool and granted it broad "allow all" permissions. A limited number of customers had credentials compromised and were urged to rotate them immediately. The incident shows how tools that require sweeping permissions to function create growing risk as AI adoption spreads.
Bank of Canada puts AI cybersecurity risk on the global agenda. The Bank of Canada’s governor warned that global financial systems need to urgently address risks from rapid AI advances. The warning comes on the heels of the limited release of Anthropic’s Mythos, a powerful new AI model for security that is reportedly being tested by multiple banks. The concern is that AI capabilities could affect cybersecurity, market stability, and operational resilience in ways that are still not fully understood, and addressing those risks demands coordination among firms, regulators, and policymakers. Expect accelerating guidance, increased vendor scrutiny by banks, and growing emphasis on mandatory reporting or stress-testing of AI-related operational risks.
Over half of Americans already use AI to manage their finances. Plaid data found that over half of Americans used AI to manage their finances in the past year, and a similar share believe managing money without AI will soon feel obsolete. Among Gen Z and millennials, roughly half say they are more comfortable discussing finances with AI than with a human. Despite that confidence, about three-quarters of respondents said they want to know when AI is being used in financial decisions, and most expect organizations to reimburse customers for AI-driven errors.
A vendor breach tied to ransomware claims hits a bank holding company. A bank holding company disclosed a third-party data breach impacting limited customer information. Most of the data was masked test data, and there’s no evidence yet of unauthorized access. A ransomware group publicly claimed the organization as a victim and posted sample data, alleging access to a large dataset potentially including names, addresses, and account-related information.
Recently Added Articles as of April 16
Third-party analytics breach ripples across more than a dozen companies. The ShinyHunters extortion group breached Anodot, an AI-powered cloud analytics platform, stealing authentication tokens that granted access to customer data stored in Snowflake environments. More than a dozen companies are now dealing with the fallout.
Wealth management firms hit with class action lawsuits over data breaches. Two wealth management firms face class action suits over alleged failures to protect client data, joining a growing list of firms. One of the suits alleges ShinyHunters stole Salesforce records containing client PII and over 200GB of internal corporate data. On a more proactive note, FINRA launched a new Financial Intelligence Fusion Center — a secure portal for member firms to share and act on emerging cyber threat intelligence.
OneDigital warns nearly 28,500 clients of Salesforce data breach. OneDigital Investment Advisors notified clients that their data stored in Salesforce, the firm's CRM platform, was potentially accessed and copied by an unauthorized actor. The entry point was Drift, a connected chat tool, not the core Salesforce platform itself. The breach is part of a pattern of third-party and integration-layer attacks hitting financial advisory firms, with ShinyHunters linked to several incidents across the industry.
Vendor coercion is a governance problem for financial institutions. When access to business opportunities is conditioned on adopting a vendor's technology, normal procurement and risk governance break down, and AI raises the stakes. Security experts warn that tools adopted under pressure often bypass security review, create data exposure risk, and embed themselves in workflows before anyone has fully evaluated them.
The financial fallout from breaches extends well beyond the incident itself. The real cost of a data breach is the long tail of litigation, regulatory investigations, customer notification obligations, and reputational damage that can persist for years. A new report shows that losses from data-theft incidents were more than double those from incidents without data theft, and that data theft accounted for 40% of the value of cyber claims in the first half of 2025. Organizations are increasingly leaning on insurance brokers and peer networks to benchmark exposure and stay ahead of evolving threats.
Recently Added Articles as of April 9
AI adoption at wealth management firms is raising the bar for vendor oversight. As the SEC shifts toward principle-based oversight under Chair Paul Atkins, RIAs and wealth managers have more flexibility in designing compliance programs — but face less tolerance for ambiguity when things go wrong. AI amplifies risk across cybersecurity, vendor management, and data governance simultaneously, and regulators now expect firms to know exactly how AI is being used across their organizations, including by third-party providers. Exam requests increasingly include vendor lists, due diligence records, and technology governance documentation.
Insurance regulators are increasing AI vendor scrutiny. At the NAIC's 2026 Spring Meeting, regulators advanced a proposal to create a registry for AI model and data vendors used by insurers, signaling heightened scrutiny of third-party governance in underwriting and pricing. State pilot programs are already underway to assess how insurers use AI across business functions, with a focus on high-risk applications. Regulators also flagged growing concern around agentic AI systems, noting the difficulty of assigning accountability when errors cascade across multiple automated agents. Business continuity planning, not just breach containment, must be standard.
Regulators remove reputation risk from the supervisory playbook. The FDIC and OCC jointly finalized a rule prohibiting both agencies from criticizing institutions or taking adverse action against them based on reputation risk. That protection now explicitly extends to third-party relationships. Regulators can no longer require, encourage, or pressure an institution to terminate or modify a contract with a vendor based on that vendor's political views, lawful business activities, or perceived reputational exposure. The rule takes effect 60 days after publication in the Federal Register.
AI-powered attacks and supply chain vulnerabilities are reshaping cybersecurity. Cybercriminals are increasingly using AI to run more convincing phishing campaigns, develop self-replicating malware, and manipulate AI models. Supply chain exposure remains a critical weak point, as a single vendor's compromised credentials can cascade into hundreds of millions in losses. With class action lawsuits, state AI legislation, and heightened executive liability all adding to the pressure, cybersecurity is a business, legal, and governance issue that demands continuous third-party oversight and tested incident response plans.
RIAs need AI policies and incident response plans ready for SEC exam season. With the SEC zeroing in on compliance program policies and procedures in its 2026 examination priorities, investment advisers that use AI tools need a formal policy covering appropriate use, client data protection, vendor due diligence, and recordkeeping. The more urgent deadline is June 3, when all SEC-registered RIAs must have a written incident response plan under Regulation S-P — one that outlines how the firm will assess a breach, contain and remediate the incident, notify affected clients, and ensure service providers report any security breach involving customer data within 72 hours.
GLBA reaches further than most vendors realize. Many organizations that serve financial institutions, such as data brokers, call centers, or technology providers, assume GLBA doesn't apply to them, but that assumption can be costly. Under GLBA, financial institutions must contractually push data security, confidentiality, and breach notification requirements down to every vendor handling customer information, effectively extending GLBA-level obligations into the vendor's environment regardless of whether the vendor is directly regulated. Vendors that cross into financial activity themselves by offering credit scoring, consumer wallets, or direct lending can trigger compliance obligations.
Open banking is expanding fast — and credit unions' third-party risk programs may not be keeping pace. The open banking ecosystem now includes more than 4,000 financial institutions and 10,000 fintech companies, with over 100 million consumers linking financial accounts. But that growth is outpacing governance. Unlike traditional vendor relationships with formal reviews and contractual safeguards, open banking is consumer-initiated, meaning data can flow through aggregators, platforms, and downstream service providers that the credit union never directly vetted. When something goes wrong, the institution typically absorbs the reputational and operational fallout, making continuous monitoring, sub-processor visibility, and strong consent management essential.
Vendor breach at an AI data company exposed proprietary training secrets. Meta suspended its relationship with Mercor, an AI data labeling and processing vendor, after a security incident potentially exposed closely guarded details about how the company — and several other leading AI labs — train their models. The breach highlights a fundamental vulnerability in the AI supply chain: the demands of training frontier models have pushed even large organizations to rely on specialized outside vendors, creating multiple points of compromise around some of the industry's most sensitive intellectual property.
Attack on a third-party support platform exposed customer data. Hims & Hers confirmed hackers broke into its third-party customer service ticketing system over a three-day window in February, compromising customer names, contact information, and support ticket data. The breach was a social engineering attack that tricked employees into granting system access.
Third-party ransomware attack compromises Nissan data. The Everest ransomware group claimed to have stolen 910 gigabytes of customer, dealership, and auto loan data after accessing a file-transfer system. Nissan said its own systems were not compromised, but the vendor's infrastructure reportedly had publicly exposed credentials that hadn't been rotated in years and no multi-factor authentication in place.
The riskiest vendors aren't always the ones you're watching most closely. Over-privileged access and weak everyday workflow controls can pose a greater threat than dramatic ransomware attacks. When it comes to third-party vendors, treat them as integrated parts of the supply chain rather than external counterparties. Keep a live vendor inventory, require subcontractor disclosure, and perform ongoing monitoring. Before onboarding any AI-native tool, ask detailed questions about data retention, model training practices, jurisdictional rules, and contractual liability — and require independent audits to verify the answers.
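As an illustration of what a live vendor inventory record might capture, here is a minimal sketch in Python. The field names, the 365-day review cadence, and the flagging rules are illustrative assumptions rather than a prescribed schema; a real TPRM program would track far more.

```python
# Minimal sketch of a "live" vendor inventory record and a review check.
# Field names and the 365-day review cadence are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class VendorRecord:
    name: str
    criticality: str                      # e.g., "high", "medium", "low"
    subcontractors: list[str] = field(default_factory=list)
    last_assessment: date | None = None
    retains_customer_data: bool = False
    trains_models_on_customer_data: bool = False
    independent_audit_on_file: bool = False

def needs_review(vendor: VendorRecord, today: date, cadence_days: int = 365) -> list[str]:
    """Return the reasons a vendor record is overdue for follow-up."""
    reasons = []
    if vendor.last_assessment is None:
        reasons.append("no assessment on record")
    elif today - vendor.last_assessment > timedelta(days=cadence_days):
        reasons.append("assessment older than review cadence")
    if vendor.trains_models_on_customer_data and not vendor.independent_audit_on_file:
        reasons.append("AI training on customer data without an independent audit")
    if vendor.criticality == "high" and not vendor.subcontractors:
        reasons.append("no subcontractor disclosure for a high-criticality vendor")
    return reasons

# Example: a hypothetical AI-native tool onboarded without an audit gets flagged.
tool = VendorRecord(
    name="ExampleAIVendor",
    criticality="high",
    last_assessment=date(2025, 1, 15),
    retains_customer_data=True,
    trains_models_on_customer_data=True,
)
print(needs_review(tool, today=date.today()))
```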
Recently Added Articles as of April 2
Three forces are escalating your third-party risk. Geopolitical conflict, AI-powered attacks, and cyber inequity across vendor ecosystems are converging to create an environment where well-defended organizations still suffer serious incidents. More than 35% of data breaches now originate from a compromised vendor or partner — not from any failure in internal controls. Organizations need to plan for incidents, assume a partner will eventually be compromised, and build coordinated response into their programs before disruption hits.
Smaller investment advisers face a June 3 deadline on Reg S-P. Registered investment advisers with less than $1.5 billion in assets under management must comply with the SEC's amended Regulation S-P requirements by June 3. The amendments require written incident response programs, 30-day customer breach notification, and formal oversight of service providers with access to customer data, including a 72-hour notification requirement if a provider experiences a breach. The SEC has named Reg S-P compliance a 2026 examination priority, so smaller firms should start preparing now.
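To make those clocks concrete, here is a toy sketch of how a firm might track the two Reg S-P timelines. The single discovery timestamp and plain calendar counting are simplifying assumptions, not a reading of the rule text; in practice the 72-hour clock runs from the service provider's own discovery.

```python
# Toy Reg S-P deadline tracker, a simplified sketch only. Exact triggers
# and clock rules come from the rule text and counsel, not this code.
from datetime import datetime, timedelta

def regsp_deadlines(discovered: datetime) -> dict[str, datetime]:
    return {
        # Amended Reg S-P: notify affected customers within 30 days.
        "customer_notification_due": discovered + timedelta(days=30),
        # Service providers report breaches involving customer data
        # within 72 hours (simplified here to the same discovery time).
        "provider_report_due": discovered + timedelta(hours=72),
    }

if __name__ == "__main__":
    for label, due in regsp_deadlines(datetime(2026, 6, 10, 9, 0)).items():
        print(f"{label}: {due:%Y-%m-%d %H:%M}")
```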
Financial services firms need tested exit strategies. Static exit plans and generic documentation aren't enough when a critical supplier fails, underperforms, or no longer fits your strategy. Leading organizations are building scenario-specific strategies that distinguish between planned and stressed exits, continuously refreshing documentation as supplier models evolve, and embedding exit planning into business continuity and disaster recovery functions. Hidden sub-outsourcing chains and cloud dependencies remain a persistent blind spot. Without deeper dependency mapping, a rapid large-scale exit may not be as feasible as it looks on paper.
Investment advisers using AI face five key compliance considerations. As AI moves closer to investment decisions, regulators are shifting focus from conflicts of interest to fiduciary duty of care. The SEC's 2026 examination priorities specifically flag the use of automated investment tools and AI technologies. Advisers should be prepared to explain what their AI tools and vendors do and how they monitor them, document intended use cases and material changes, and assess how customer data flows through these systems under Regulation S-P, particularly as tools become more autonomous.
Service and support are the vendor criteria banks keep underweighting. Banks and credit unions under pressure to keep up with fast-moving technology often prioritize features over everything else when making vendor decisions — but that instinct can backfire. The ABA's most recent Core Platforms Survey puts average vendor satisfaction at just 3.19 out of 5, with core provider effectiveness scoring even lower at 2.78. When credit union leaders whose tech plans fell short were asked why, 53% cited insufficient vendor support. For community banks and credit unions already squeezed by competitive pressure, regulatory change, and AI deployment demands, evaluating vendors on service quality, client satisfaction data, case resolution times, and support team structure is critical.
Supply chain cyber resilience demands leadership, not just IT fixes. Supply chain attacks scale easily: compromise one vendor and you can reach hundreds of downstream networks. Yet only 16% of UK organizations brief their C-suite on cybersecurity monthly or more, leaving meaningful accountability gaps at the top. Building real resilience requires more than reactive patching: it means mapping root causes, maintaining clear supplier documentation, and embedding incident response coordination across the entire vendor ecosystem.
Lloyds Banking Group data exposure hits nearly half a million customers. A software defect during an overnight update at Lloyds Banking Group allowed customers to briefly view transaction data belonging to other users, including account numbers and National Insurance numbers. Almost 450,000 customers were affected.
