What is AI Washing and What are the Risks?

Written by Monica Bolin, CERP, Manager, Enterprise Risk Management | Nov 21, 2024

Artificial intelligence (AI) washing has become a heated topic as more companies tout their AI-powered products and services, from washing machines to new investment products. But are many companies overstating their use of AI? And if so, is it a big deal?

Some regulators think so. In March 2024, SEC Chair Gary Gensler shared his thoughts on AI and its use in finance, citing its potential benefits of “greater inclusion, efficiency, and user experience.” The same day, the SEC charged two investment advisers with “making false and misleading statements about their use of artificial intelligence.” Ultimately, the firms agreed to settle the SEC’s charges and pay $400,000 in civil penalties.

While AI washing is on regulators’ radar, AI-related risks go beyond false marketing practices. That’s one of the reasons the Fintech Open Source Foundation (FINOS) released its AI Readiness Governance Framework, which serves as a guide for financial institutions onboarding generative AI (GenAI) technology.

So, what are the risks associated with AI usage? What controls can financial institutions like banks, credit unions, mortgage companies, and others put in place to reduce and mitigate risk?

We’ll discuss all that and more. But first, what exactly is AI washing?

What is AI washing?

AI technology offers many benefits for FIs, from automating routine tasks such as data entry and compliance reporting to real-time fraud detection. For example, AI systems can analyze an account's transaction patterns and detect anomalies that indicate fraudulent activity—a win-win scenario for consumers and FIs.
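To make the fraud-detection example above concrete, here is a minimal sketch of transaction anomaly detection using scikit-learn's IsolationForest. The features, amounts, and contamination rate are illustrative assumptions only, not a description of any specific FI's or vendor's system.

```python
# Minimal anomaly-detection sketch; feature choices and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical transaction history for one account: [amount_usd, hour_of_day]
history = np.array([
    [42.50, 9], [17.20, 12], [63.00, 18], [25.10, 11],
    [38.75, 10], [54.30, 19], [29.99, 13], [48.60, 17],
])

# Fit an isolation forest to the account's normal spending pattern
model = IsolationForest(contamination=0.05, random_state=0)
model.fit(history)

# Score incoming transactions: predict() returns -1 for anomalies, 1 for normal
new_txns = np.array([[31.40, 12], [2500.00, 3]])
for txn, flag in zip(new_txns, model.predict(new_txns)):
    status = "flag for review" if flag == -1 else "ok"
    print(f"amount=${txn[0]:.2f}, hour={int(txn[1])}: {status}")
```

In practice, a production fraud model would use far richer features (merchant, geolocation, device data) and route flagged transactions to human review rather than blocking them outright.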

However, there are significant risks when FIs use AI, including AI washing.

AI washing occurs when companies or organizations exaggerate or misrepresent their use of AI in marketing and communications. At its core, AI washing is a form of false advertising. For example, Amazon's AI-powered Just Walk Out technology in its Amazon Fresh grocery stores came under scrutiny when reports circulated that the e-commerce giant relied on more than 1,000 workers to manually check more than 75% of transactions. Amazon denied the accusations and later released a blog post outlining its use of generative AI, computer vision, and sensor fusion to "invent checkout-free technologies."

Beyond its regulatory and reputational risks, AI washing can have a dual effect on public perception. While some consumers are excited about the potential of AI technology, others who are generally cautious about adopting emerging technologies may be skeptical. This division has led some financial institutions to embrace AI while others remain wary of its implications.

Before using AI, the board and management team should determine whether the current state of AI technology aligns with an institution's mission, vision, values, and risk appetite.

Related: Q&A: The Future of Artificial Intelligence and Contract Management

Is AI washing a major risk?

The SEC isn’t the only regulator cracking down on AI washing. The Federal Trade Commission (FTC) recently announced five cases “exposing AI-related deception.” The cases are part of the FTC’s Operation AI Comply, a new law enforcement sweep focused on companies using AI to deceive or harm consumers.

But AI washing isn’t just a problem in the U.S. Rules and laws covering AI washing already exist in the UK under the Advertising Standards Authority’s (ASA’s) code of conduct. The Canadian Securities Administrators (CSA) recently released CSA Staff Notice 51-365, which warns issuers to “exercise caution in using broad terms” and to substantiate any AI claims they make. In other words, institutions and companies must be able to prove their AI usage and provide details on how it’s used in their products and services.

How to address AI washing risk

One of the most significant risks associated with AI washing is third-party risk. FIs must thoroughly evaluate their vendors' AI solutions to ensure the vendors' claims align with what the technology actually delivers. This evaluation includes understanding the specifics of how AI is integrated into products or services and being aware of any potential risks associated with those technologies. After all, regulators, including the Office of the Comptroller of the Currency (OCC), the Federal Deposit Insurance Corporation (FDIC), and the Federal Reserve, have stated that banks are responsible for the actions of their vendors.

Due diligence is also critical in reducing the potential for AI washing. Before publishing or promoting any marketing campaigns or materials that mention or suggest the use of AI, FIs should run the content by the compliance department. Compliance team members should review the materials for accuracy, clarity of phrasing and disclosures, suitability for the intended audience, content (including diversity), and delivery channels. If vendors distribute the marketing materials, your FI's compliance team should also review their distribution methods and any system data points or parameters used to determine the audience.

Related: The Ncast Podcast: AI and Risk Explained

Exploring other AI-related risks

While AI washing has received significant attention from the media and some regulators, the concern is largely rooted in the obstacles and opportunities inherent in the technology itself.

Generative AI, or GenAI, is one of the most widely used subsets of AI, but it also comes with significant risks. When creating text, images, videos, and other content with generative AI, ensure robust quality control processes are in place to verify the accuracy and reliability of both its inputs and its outputs.

Another critical risk is algorithmic bias. If not managed carefully, AI systems can perpetuate or even exacerbate biases within data sets, leading to discriminatory outcomes. For example, an FI that uses AI technology in its lending systems should evaluate its lending processes and outcomes for disparate impact to comply with HMDA, CRA, and other regulations.
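As a hedged illustration of one common first-pass screen for disparate impact, the sketch below applies the "four-fifths rule," which flags any group whose approval rate falls below 80% of the highest group's rate. The group names, counts, and the 0.8 threshold are illustrative assumptions, and this ratio alone is not a compliance determination under HMDA, CRA, or fair-lending law.

```python
# Illustrative disparate-impact screen using the "four-fifths rule".
# The group names and approval counts below are hypothetical.
approvals = {
    "group_a": (480, 600),  # (approved, applied)
    "group_b": (300, 500),
}

rates = {g: ok / total for g, (ok, total) in approvals.items()}
benchmark = max(rates.values())  # highest group's approval rate

for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "review for disparate impact" if ratio < 0.8 else "within threshold"
    print(f"{group}: approval rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```

A ratio below 0.8 here is only a signal to dig deeper with statistical analysis and legal review, not proof of discrimination.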

Related: The Risks of AI in Banking

What controls can FIs put in place to reduce AI risk?

Let’s explore additional AI-related risks and ways to mitigate them in detail.

Risk: Information Security

Information security focuses on mitigating harm from data breaches and other cyber vulnerabilities. The rise of AI technology has increased information security risks, including AI cybersecurity risks, where bad actors leverage AI to gain access to systems and produce more efficient cyberattacks.

Controls

Suggested controls for managing information security risk include:

  • Comprehensive risk management program. This control applies to all potential AI risks. Ensure your FI’s risk management program has established assessments and internal controls (including reporting processes and functions for risk management, compliance, and internal audit) that align with your institution’s risk profile. This program should be documented with key data regularly reported to board members to ensure the program accurately reflects your institution’s risk appetite, risk tolerance, and other risk limits.
  • Rigorous risk management of AI models and tools. AI should be a focus within your FI’s broader risk management program. Critical components of an AI risk management program include due diligence and risk assessments; qualified staff for risk accountability; an inventory of AI uses and associated risks; defined parameters and a policy for AI use; a validation process for sound and unbiased results; effective technology controls (access, integrity, monitoring, etc.); internal audits and reviews; regular staff training; and change management processes for updates to AI tools.

Related: Essential Risk Assessments for Financial Institutions

Risk: Consumer data security and privacy

A major AI risk is the dissemination of consumers’ personal information. Even if your FI isn’t actively sharing this information, the risk extends to your third-party and fourth-party vendors.

Controls 

Here are some controls for managing consumer data properly in AI environments:

  • A strong vendor management program. Your FI's vendor management program should include comprehensive policies covering information security, the Gramm-Leach-Bliley Act (GLBA), and cybersecurity. Institutions should maintain an inventory of vendor services and conduct regular reviews for critical vendors (Tier 1). Additionally, Tier 2 vendors should undergo supplementary reviews, and there should be assessments of potential risks posed by fourth-party vendors and beyond.
  • Secure storage policies. This control applies to sensitive information on computer systems, physical media, and hard-copy documents. Procedures to protect this data include physical controls, logical controls (passwords, biometrics, etc.), and environmental controls (fire, destruction, and flood protection, for instance). Stored information should also be classified and inventoried properly so it can be retrieved or destroyed as needed.

Looking Ahead: The Future of AI Risk

Currently, there are no comprehensive regulations governing financial institutions' use of AI. However, managing AI risks is essential for these systems' responsible development and application.

Ensuring your institution's AI solutions align with your core values and strategic objectives is essential. The Artificial Intelligence Risk Management Framework (AI RMF 1.0), published by the National Institute of Standards and Technology (NIST), part of the U.S. Department of Commerce, and the FFIEC Information Technology Examination Handbook for Architecture, Infrastructure, and Operations are valuable resources for financial institutions navigating AI-related risks.

Building transparency and trust is crucial when implementing this emerging technology across products, solutions, or features. Communicate how your institution or its vendors are utilizing AI capabilities. Doing so will foster trust with consumers and avoid the dangers of AI washing and other risks linked to AI usage.

Need help navigating AI risk? Watch our webinar:

“Managing AI Risk: A Primer for Financial Institutions.”