How Is Your Financial Institution Managing AI Cybersecurity Risks?

Written by Rafael DeLeon | Oct 24, 2024 6:00:00 PM

Artificial intelligence is transforming financial services, but it’s also giving cybercriminals new and sophisticated tools to breach systems. Hackers are increasingly using AI to exploit vulnerabilities in security, conduct social engineering attacks, and bypass safeguards like multi-factor authentication. As both financial institutions and criminals become more adept with AI, the need for robust, forward-thinking defenses continues to grow. 

Now the New York State Department of Financial Services (NYDFS) is calling out these risks in new guidance that highlights the importance of risk management and controls in mitigating AI cyber threats. While the guidance only applies to financial institutions operating in New York State (and doesn’t introduce any new requirements), it’s a valuable resource for any financial institution since it reflects industry best practices.

It also hints at where future industry guidance might lead since New York has a reputation as an early adopter of financial services regulation.  

Let’s look at the AI cyber threats that NYDFS is calling out and the actions it recommends financial institutions take to guard against them.

AI cyberthreats financial institutions need to know about

Leveraging AI deepfakes to gain system access

Realistic AI deepfakes (interactive audio, video, and text imitations of real-life people) are gaining traction as a tool to trick employees into divulging information by impersonating co-workers and customers. Deepfakes are used for phishing via email, telephone, and text messages – and even fake video calls – and they are proving highly effective.

One worker in Hong Kong made international headlines in 2024 when he was tricked into transferring more than $25 million to criminals who used AI to stage a video meeting impersonating the company’s CFO and other colleagues.

Related: AI Is Costing Financial Institutions Millions 

Cybercriminals are also using deepfakes to circumvent biometric-based MFA, including fingerprint, facial, and voice recognition. For example, fraudsters have targeted banking apps in Vietnam and Thailand using deepfakes of customers’ faces.

A Deloitte study estimates that generative AI – including deepfakes – could cause fraud losses to skyrocket from $12.3 billion in 2023 to $40 billion by 2027.

Faster, more powerful cyberattacks 

AI can help cybercriminals identify and exploit security vulnerabilities, develop and refine malware more quickly, and extract data more efficiently. It can also help them identify security controls and find workarounds to slip past them undetected.

From gathering and analyzing research on targets to quickly adapting and improving techniques, AI enables cybercriminals to attack faster. Between 2022 and 2023, the average time it took an attacker who had compromised a system to move laterally through it and attempt to extract data dropped from 84 to 62 minutes, according to the CrowdStrike 2024 Global Threat Report – a speedup driven in part by the power of generative AI.

Lowering barriers to entry in cybercrime 

AI is a helpful tool for coding and writing, one that is lowering the barrier to entry for fraudsters.  

Traditionally, one of the easiest ways to identify a potentially “phishy” email is spelling and grammar. Whether it’s a tech support scam, a fake delivery notice, or a Nigerian prince willing to pay for help transferring large sums of money, these scams have been notable for their typos and general disregard for spelling, grammar, and capitalization conventions. 

Scammers can use AI to write emails that are more likely to pass as authentic, eliminating one tell-tale sign of a fraudulent message.

AI can also be used to write code that helps attackers access systems, making it possible for scammers without strong technical skills to launch cyberattacks. NYDFS predicts this will lead to more frequent cyberattacks in the future.

Accessing AI data collections

AI tools require large amounts of data to train and function – making them attractive targets for criminals. That data may sit with financial institutions or with their third-party partners and vendors; anywhere with large amounts of non-public personal information (NPI) is appealing to data thieves.

That means AI isn’t just a source of cybercrime. It’s an attractive target.  

Managing AI cybersecurity risk

The best way to combat cyberthreats is with a risk-based approach, according to the NYDFS guidance, which reflects industry best practices. That means conducting regular risk assessments and implementing overlapping controls to mitigate risk.

Related: Creating Reliable Risk Assessments: How to Measure Cyber Risk 

More specifically, it advises institutions to consider the role of AI when assessing cybersecurity risk, including:

  • How the financial institution uses AI technologies 
  • Third-party service provider (TPSP) use of AI technologies 
  • AI application vulnerabilities that could threaten data confidentiality, integrity, and availability

Ongoing risk assessments are a must. Under NYDFS’s Cybersecurity Regulation, financial institutions operating in New York State need to update their cybersecurity risk assessments annually or whenever there’s a material change in cyber risk, including changes in business or technology. This assessment should include risks posed by AI, and the information it uncovers should be used to decide whether controls, including policies and procedures, should be changed or added to adequately mitigate risk.
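
NYDFS doesn’t prescribe a scoring model for these assessments, but a common convention is a likelihood-times-impact matrix, with controls reducing inherent risk to a residual figure. The Python sketch below is a hypothetical illustration – the risk names, scales, and control-effectiveness numbers are invented, not drawn from the guidance:

```python
from dataclasses import dataclass

# Hypothetical illustration: NYDFS does not prescribe a scoring model.
# Assumes a 1-5 likelihood x impact scale, with controls reducing
# inherent risk by an estimated percentage to yield residual risk.

@dataclass
class AIRisk:
    name: str
    likelihood: int        # 1 (rare) to 5 (frequent)
    impact: int            # 1 (minor) to 5 (severe)
    control_effect: float  # estimated risk reduction, 0.0 to 1.0

    @property
    def inherent(self) -> int:
        return self.likelihood * self.impact

    @property
    def residual(self) -> float:
        return self.inherent * (1 - self.control_effect)

register = [
    AIRisk("Deepfake-enabled wire fraud", 4, 5, 0.6),
    AIRisk("Vendor AI tool exposes NPI", 3, 5, 0.4),
    AIRisk("AI-written phishing evades filters", 5, 3, 0.5),
]

# Rank by residual risk so the largest remaining exposures surface first
for risk in sorted(register, key=lambda r: r.residual, reverse=True):
    print(f"{risk.name}: inherent={risk.inherent}, residual={risk.residual:.1f}")
```

Re-running a calculation like this after a material change in business or technology matches the update cadence the regulation requires.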

AI cybersecurity risk controls

The NYDFS guidance recommends “multiple layers of security controls with overlapping protections” to reduce risk. It’s a smart approach that creates an environment where even if hackers are able to overcome one control, there will still be others standing in their way. 

Examples of AI cybersecurity risk controls include: 

  • Cybersecurity policies and procedures. These establish clear rules for AI system use and security. By enforcing consistent practices, they help mitigate vulnerabilities and ensure compliance with regulatory standards.
  • Incident response, business continuity, and disaster recovery plans. Plans should address cybersecurity events, including those related to AI, and be tested.  
  • Senior leadership oversight of cybersecurity. This includes knowledge of your institution’s AI risk exposure and management. 
  • A robust third-party risk management (TPRM) program. The program should oversee AI and AI-powered products and services. At a minimum, TPRM policies should address access controls, encryption, due diligence expectations, and contractual protections. As with any vendor that has access to sensitive data, pay extra attention to AI-powered providers with access to NPI. Contracts should require and clearly define timely notification of cybersecurity events involving your institution’s protected data, and they should address privacy, security, and confidentiality concerns.
  • Access controls. Use multi-factor authentication (required by NYDFS starting in November 2025) and strict controls that limit the information users can access based on their job function. The guidance also recommends limiting the number of accounts with widespread data access. Withdraw access when it’s no longer needed, and restrict remote access to devices. (A minimal illustration of these checks appears after this list.)
  • Cybersecurity training. Everyone at a financial institution, from frontline staff to the board, needs cybersecurity training to prevent AI-engineered and other social engineering attacks (a requirement in New York). Staff who use AI-powered applications should be trained on the importance of not disclosing NPI. Staff who handle transactions need training on how to identify transactions that appear fraudulent and on procedures for proper identity verification. Additional training on AI threats is a must for cybersecurity staff, and any staff or vendor working with AI needs to know how to secure and defend the system.
  • Monitoring. Have systems in place to detect unauthorized or unusual access and to block malicious content and code. (A simple monitoring sketch also follows this list.)
  • Data management. Threat actors can’t access information that doesn’t exist. Dispose of unneeded NPI promptly and properly, and maintain data inventories so that, in the event of a breach, you know which NPI and systems were affected.
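
The NYDFS guidance describes these controls in principle rather than prescribing implementations. As a minimal sketch of the access-control item – with role names, record fields, and policy details invented for illustration – a least-privilege check in Python might look like this:

```python
from datetime import datetime, timezone

# Hypothetical illustration: role names and user fields are invented,
# not drawn from the guidance. Access requires valid MFA, an unexpired
# grant, and a resource within the user's job-function scope.

ROLE_SCOPES = {
    "teller": {"customer_profile"},
    "loan_officer": {"customer_profile", "credit_report"},
    "admin": {"customer_profile", "credit_report", "audit_log"},
}

def may_access(user: dict, resource: str) -> bool:
    if not user.get("mfa_verified"):
        return False                      # MFA required for all access
    expires = user.get("access_expires")
    if expires and expires < datetime.now(timezone.utc):
        return False                      # stale grants are withdrawn
    return resource in ROLE_SCOPES.get(user["role"], set())

user = {
    "name": "jdoe",
    "role": "teller",
    "mfa_verified": True,
    "access_expires": datetime(2026, 1, 1, tzinfo=timezone.utc),
}
print(may_access(user, "customer_profile"))  # True
print(may_access(user, "credit_report"))     # False: outside job function
```

Note the layering: MFA, grant expiry, and job-function scope must each pass independently – the overlapping protections the guidance calls for.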
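
For the monitoring item, one hypothetical first pass is to flag accounts whose daily data access jumps well above their own baseline. The account names, counts, and threshold below are invented for illustration:

```python
from collections import defaultdict

# Hypothetical illustration: flag accounts whose data access volume
# spikes far above their own recent baseline. Threshold is invented.

def flag_unusual_access(events, baseline, factor=5):
    """events: list of (account, records_accessed) for the current day.
    baseline: dict of account -> typical daily records accessed."""
    totals = defaultdict(int)
    for account, count in events:
        totals[account] += count
    return [a for a, total in totals.items()
            if total > factor * baseline.get(a, 1)]

baseline = {"svc-report": 200, "jdoe": 40}
today = [("svc-report", 150), ("jdoe", 900)]  # jdoe far above baseline
print(flag_unusual_access(today, baseline))   # ['jdoe']
```

A real deployment would feed flags like this into a review workflow rather than acting on a single threshold.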

Related: Cybersecurity Breaches: How to Protect Your Financial Institution  

Proactive strategies for managing AI-driven cyber threats

As AI grows more sophisticated, so does the challenge of managing AI-related cyberthreats. While AI gives financial institutions better tools to combat fraud and cyber threats, it also empowers scammers and cybercriminals.

As financial institutions integrate AI into their operations, ongoing AI cyber risk assessments – and regularly introducing and adjusting controls to mitigate risk – become essential. This includes understanding how AI is used internally and by third-party service providers, as well as recognizing vulnerabilities that could jeopardize data confidentiality, integrity, and availability.

Risk management software is a valuable tool for helping financial institutions manage these risks effectively. For instance, Ncontracts’ Nrisk platform provides tools such as an AI risk assessment template, making it easier for organizations to identify, understand, and address their risk exposure and evaluate controls. Vendor management solutions like Nvendor help ensure your institution has a strong third-party risk management program that understands vendor use of AI and leverages contracts and vendor oversight to mitigate the risk.

Ultimately, managing AI cyberthreats requires not just vigilance but a commitment to continuously evolving your institution’s risk management practices. By leveraging AI-specific tools and adhering to industry best practices, financial institutions can mitigate the dangers posed by AI while reaping its benefits, ensuring they remain secure in an increasingly digital landscape. 

What does an effective risk management solution look like? 

Download our buyers guide and find out.