If you’re assuming your third-party service provider is following cybersecurity best practices because it’s smart business, think again.
Vendors are required to honor the terms of a contract and follow applicable regulatory guidance. But unless a contract clearly defines items like internal controls and incident response plans, you have no way of knowing how a vendor will handle a potential breach.
When controls fall short
Consider the 2014 case of a vendor alleged to have allowed unauthorized access to several client institutions’ unencrypted sensitive information on its server. While an investigation by the FDIC’s Office of Inspector General (OIG) ultimately found the allegations unfounded, it did uncover several issues that informed future guidance.
The problem began with suspicious activity caused by adware, a form of malware that keeps advertising onscreen and often redirects searches. The vendor investigated the issue and found no proof that it had been the victim of a cyberattack or that data had been accessed by outsiders. It didn’t notify clients or regulators because nothing in guidance or its contracts required it to.
Yet the OIG still had concerns with how the situation was handled because best practices weren’t followed. The watchdog noted the vendor didn’t:
- collect or retain forensics information such as an image of the server or a copy of the adware.
- use computer activity logging controls that could have helped determine whether any data had been accessed or exfiltrated.
This failure to follow best practices meant that the vendor wasn’t as prepared to protect against and respond to cyberattacks as it could have been. The OIG concluded the vendor had “a poor internal control environment and a vague incident response policy.”
It also said that the contract could have done a better job of defining terms related to:
- incident response
- notification requirements
The OIG has since looked more closely at cybersecurity provisions in third-party vendor contracts, releasing a report last year. Its findings show that when it comes to incident reporting, most contracts require the vendor to inform its client of a breach, but don’t address how the vendor assesses and responds to potential incidents or reports them to authorities. Few include consequences if a vendor fails to meet incident response and reporting standards.
Key terms found in guidance often don’t make it into contracts, the OIG found. Worse yet, when they do, they are rarely defined. As a result, contracts and their terms are often vague and hard to enforce. Vague terms expose financial institutions to increased risk because they make it harder to manage vendor business continuity planning and incident response.
Consider the term “timely notification of financial institution.” About 20 percent of financial institutions in the OIG’s study included the term in contracts but provided no definition, leaving the phrase open to interpretation. Another 43 percent provided only a limited definition. Other minimally defined terms include “unauthorized access,” “security incident,” and “substantial harm or inconvenience.”
Some terms are left out entirely. Frequent omissions include “potential breach,” “significant disruption,” “material impact,” and “cyber event.”
What to do?
Don’t let vague terminology put your institution’s cybersecurity at risk. Define terms like “potential breach” in your contract, and detail exactly how much time a vendor has to report an incident. While a vendor won’t necessarily disclose every element of its cybersecurity program, it should be able to make definitions clear and outline expectations for how breaches are handled.