Artificial intelligence is transforming financial services, but it’s also giving cybercriminals new and sophisticated tools to breach systems. Hackers are increasingly using AI to exploit vulnerabilities in security, conduct social engineering attacks, and bypass safeguards like multi-factor authentication. As both financial institutions and criminals become more adept with AI, the need for robust, forward-thinking defenses continues to grow.
Now the New York State Department of Financial Services (NYDFS) is calling out these risks in new guidance that highlights the importance of risk management and controls in mitigating AI cyber threats. While the guidance only applies to financial institutions operating in New York State (and doesn’t introduce any new requirements), it’s a valuable resource for financial institutions since it represents industry best practices.
It also hints at where future industry guidance might lead since New York has a reputation as an early adopter of financial services regulation.
Let’s look at the AI cyber threats that NYDFS is calling out and the actions it recommends financial institutions take to guard against them.
AI deepfakes (interactive audio, video, and text imitations of real-life people) are gaining traction as a tool for tricking employees into divulging information by impersonating co-workers and customers. Deepfakes are used for phishing via email, telephone, and text messages – and even fake video calls. These impersonations are highly realistic and effective.
One worker in Hong Kong made international headlines in 2024 when he was tricked into transferring more than $25 million to criminals who used AI to simulate a video meeting with the company’s CFO and other colleagues.
Related: AI Is Costing Financial Institutions Millions
Cybercriminals are also using deepfakes to circumvent biometric MFA, including fingerprint, facial, and voice recognition. For example, fraudsters have targeted banking apps in Vietnam and Thailand using deepfakes of customers' faces.
A Deloitte study estimates that generative AI – including deepfakes – could cause fraud losses to skyrocket from $12.3 billion in 2023 to $40 billion by 2027.
AI can help cybercriminals identify and exploit security vulnerabilities, develop and enhance malware more quickly, and extract data more efficiently. It can also help them identify security controls and find workarounds to get past them undetected.
From gathering and analyzing research on targets to quickly adapting and improving new techniques, AI enables cybercriminals to attack faster. Between 2022 and 2023, the average time it took attackers operating inside a compromised system to move to other systems and begin extracting data dropped from 84 minutes to 62 minutes, according to the CrowdStrike 2024 Global Threat Report – a decline attributed in part to the power of generative AI.
AI is a helpful tool for coding and writing, one that is lowering the barrier to entry for fraudsters.
Traditionally, one of the easiest ways to identify a potentially “phishy” email has been poor spelling and grammar. Whether it’s a tech support scam, a fake delivery notice, or a Nigerian prince willing to pay for help transferring large sums of money, these scams have been notable for their typos and general disregard for spelling, grammar, and capitalization conventions.
Scammers can use AI to write emails more likely to pass as authentic, eliminating one tell-tale sign of a fraudulent email.
Scammers can also use AI to write code that helps them access systems, making it possible for those without strong technical skills to launch cyberattacks. NYDFS predicts this will lead to more frequent cyberattacks in the future.
AI tools require a lot of data to train and function – making them attractive targets for criminals. This is true at both financial institutions and their third-party partners and vendors. Anywhere with large amounts of non-public personal information (NPI) is appealing to data thieves.
That means AI isn’t just a source of cybercrime. It’s an attractive target.
The best way to combat cyberthreats is with a risk-based approach, according to the NYDFS’s guidance, which reflects industry best practices. That means conducting regular risk assessments and implementing overlapping controls to mitigate risk.
Related: Creating Reliable Risk Assessments: How to Measure Cyber Risk
More specifically, it directs institutions to consider the role of AI when assessing cybersecurity risk, including:
Ongoing risk assessments are a must. Under NYDFS’s Cybersecurity Regulation, financial institutions operating in New York State need to update their cybersecurity risk assessments annually or whenever there’s a material change in cyber risk, including changes in business or technology. This assessment should include risks posed by AI. The information uncovered in the assessment should be used to decide if controls, including policies and procedures, should be changed or added to adequately mitigate risk.
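To make this concrete, here is a minimal sketch of how an institution might score AI-related threat scenarios when updating a risk assessment. The scenario names, rating scales, and control-effectiveness values are illustrative assumptions, not something the NYDFS guidance prescribes.

```python
# Hypothetical sketch of a simple AI cyber risk scoring model.
# Likelihood and impact are rated 1 (low) to 5 (high); control
# effectiveness is a 0.0-1.0 estimate of how much existing controls
# reduce the inherent risk. All values are illustrative.

from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    likelihood: int               # 1-5
    impact: int                   # 1-5
    control_effectiveness: float  # 0.0-1.0

    def inherent_risk(self) -> float:
        # Risk before controls are considered
        return self.likelihood * self.impact

    def residual_risk(self) -> float:
        # Risk remaining after controls are applied
        return self.inherent_risk() * (1 - self.control_effectiveness)

scenarios = [
    RiskScenario("Deepfake-enabled wire fraud", 4, 5, 0.6),
    RiskScenario("AI-written phishing email", 5, 3, 0.7),
    RiskScenario("NPI theft from AI training data", 3, 5, 0.5),
]

# Rank scenarios by residual risk to decide where controls need to change
for s in sorted(scenarios, key=lambda s: s.residual_risk(), reverse=True):
    print(f"{s.name}: inherent={s.inherent_risk():.0f}, residual={s.residual_risk():.1f}")
```

A model like this is only as good as its inputs, but it gives examiners and boards a consistent way to see which AI-related scenarios still carry the most residual risk after controls.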
The NYDFS guidance recommends “multiple layers of security controls with overlapping protections” to reduce risk. It’s a smart approach that creates an environment where even if hackers are able to overcome one control, there will still be others standing in their way.
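As a rough illustration of what overlapping controls can look like in practice, the sketch below models a high-risk wire transfer that must clear several independent checks before it is approved. The specific controls and the anomaly threshold are hypothetical, not taken from the guidance.

```python
# Hypothetical sketch of layered ("defense in depth") controls for a
# high-risk wire transfer request. Each check is an independent control,
# so an attacker who defeats one still has to defeat the others.

from dataclasses import dataclass

@dataclass
class WireRequest:
    amount: float
    mfa_passed: bool          # e.g., hardware token, not biometrics alone
    callback_verified: bool   # out-of-band confirmation with the customer
    anomaly_score: float      # 0.0-1.0 from a behavioral monitoring model

def approve(request: WireRequest, anomaly_threshold: float = 0.7) -> bool:
    checks = [
        request.mfa_passed,
        request.callback_verified,
        request.anomaly_score < anomaly_threshold,
    ]
    # Every layer must pass; failure of any single control blocks the transfer.
    return all(checks)

suspicious = WireRequest(amount=250_000, mfa_passed=True,
                         callback_verified=False, anomaly_score=0.85)
print(approve(suspicious))  # False - two other layers catch what MFA missed
```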
Examples of AI cybersecurity risk controls include:
Related: Cybersecurity Breaches: How to Protect Your Financial Institution
As AI grows more sophisticated, so does the challenge of managing AI-related cyberthreats. While AI gives financial institutions better tools to combat fraud and cyber threats, it also empowers scammers and cybercriminals.
As financial institutions integrate AI into their operations, ongoing AI cyber risk assessments and regular adjustments to risk-mitigating controls become essential. This includes understanding how AI is used internally and by third-party service providers, as well as recognizing vulnerabilities that could jeopardize data confidentiality, integrity, and availability.
Risk management software is a valuable tool for helping financial institutions manage these risks effectively. For instance, Ncontracts’ Nrisk platform provides tools such as an AI risk assessment template, making it easier for organizations to identify, understand, and address their risk exposure and evaluate controls. Vendor management solutions like Nvendor help ensure your institution has a strong third-party risk management program that understands how vendors use AI and leverages contracts and vendor oversight to mitigate the risk.
Ultimately, managing AI cyberthreats requires not just vigilance but a commitment to continuously evolving your institution’s risk management practices. By leveraging AI-specific tools and adhering to industry best practices, financial institutions can mitigate the dangers posed by AI while reaping its benefits, ensuring they remain secure in an increasingly digital landscape.
What does an effective risk management solution look like?
Download our buyer's guide and find out.