Assessing risk can feel like a subjective task. Many bankers make a gut call, acting as the institution’s de facto Magic 8 Ball. But you don’t need a fortune-telling toy to know that “outlook not so good” for this practice.
Guessing at risk assessments is a dangerous practice. A risk assessment is not a check-the-box activity. It’s an essential tool for mitigating risk and evaluating controls. If you don’t put in the work to systematically and regularly evaluate risk, you’re creating even more risk.
Deep down you know this and so do examiners. That’s why they make the same request as your eighth-grade math teacher: Show your work.
Risk assessments aren’t nearly as subjective as they may appear on the surface. The FFIEC IT Examination Handbook glossary shows us exactly what we need to evaluate:
Notice there’s a theme: a risk assessment should evaluate both likelihood and impact.
Impact. The impact is an estimate of the harm that could be caused by an event. For example, a cyber breach could have a catastrophic impact.
Likelihood. Likelihood is how probable it is that an event will occur. For example, a cyber breach is a very likely occurrence when there are no firewalls, antivirus software, or intrusion detection systems to prevent it.
The more likely or severe an event, the greater the risk. This relationship is often visualized as a simple Probability and Impact Matrix: likelihood on one axis, impact on the other, with risk climbing toward the high-likelihood, high-impact corner.
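As a sketch of that matrix (the three-level labels and 1–3 scores here are illustrative choices for the example, not an FFIEC-prescribed scale), the grid is just the product of the two scores:

```python
# Illustrative Probability and Impact Matrix.
# The labels and 1-3 scoring are assumptions for this sketch,
# not a regulatory standard.
LIKELIHOOD = {"low": 1, "moderate": 2, "high": 3}
IMPACT = {"minor": 1, "moderate": 2, "severe": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Risk grows with either factor: score = likelihood x impact."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Print the matrix: rows are likelihood, columns are impact.
for lk in LIKELIHOOD:
    row = [risk_score(lk, im) for im in IMPACT]
    print(f"{lk:>8}: {row}")
```

A 9 in the high/severe corner flags the events that most urgently need controls.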
So how do you show your work? You need guidelines and scales.
Have one scale for assessing likelihood (ex: “high,” “moderate,” and “low”) and one for severity (ex: “catastrophic,” “significant,” “moderate,” “minor,” and “insignificant”), along with requirements for applying them. A guideline for likelihood might include the frequency of audit findings: a finding from the past year may indicate a risk is highly likely, while one from five years ago with no repeat findings may indicate an unlikely or remote risk.
These scales can easily be converted into numbers and plugged into equations for assessing inherent and residual risk. For example, high can be assigned a 3, moderate a 2, and low a 1.
Inherent risk scores represent the level of risk an institution would face if there weren’t controls to mitigate it. For example, think of the risk of a cyberattack if the institution didn’t have any defenses in place.
The formula for inherent risk is:
Inherent risk = Impact of an event * Probability
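With the 3/2/1 scoring described above (an illustrative scale, not a mandated one), the formula is a single multiplication:

```python
# Inherent risk = impact x probability, before any controls are applied.
# The high/moderate/low -> 3/2/1 mapping is the example scale from the text.
SCORE = {"high": 3, "moderate": 2, "low": 1}

def inherent_risk(impact: str, probability: str) -> int:
    """Risk level assuming no mitigating controls exist."""
    return SCORE[impact] * SCORE[probability]

# A cyberattack judged high impact and high probability:
print(inherent_risk("high", "high"))  # 9, the top of a 1-9 scale
```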
Residual risk is the risk that remains after controls are taken into account. In the case of a cyber breach, it’s the risk that remains after considering deterrence measures. This score helps the organization review its risk tolerance against its strategic objectives. It’s all about understanding the relationship between risk and controls.
The formula for residual risk is:
Residual risk = Inherent risk * Control effectiveness
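Reading “control effectiveness” here as the share of inherent risk the controls leave behind (my interpretation of the formula, where a value near 0 means controls mitigate almost everything), a minimal sketch looks like:

```python
def residual_risk(inherent: float, risk_remaining: float) -> float:
    """Risk left over after controls are taken into account.
    risk_remaining: 0.0-1.0 fraction of inherent risk the controls
    fail to mitigate (0.0 = fully effective controls)."""
    return inherent * risk_remaining

# Inherent risk of 9, with controls that eliminate 80% of it:
print(round(residual_risk(9, 0.20), 2))  # 1.8
```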
Residual risk is greatest when the inherent risk is high and the controls for mitigating the risk aren’t effective. It decreases when controls are effective.
That makes it important to have a method for determining how effective controls are. It comes down to two factors: the impact of the control and how likely it is to work (or, as the formula frames it, to fail).
This relationship can be expressed with the following formula:
Control effectiveness = Control impact * % ineffective
A control’s impact is the expected value of its risk mitigation. As with likelihood and impact, controls should be rated on a scale (ex: very important, important, or not very important). For example, a firewall can be very important for keeping out hackers because it covers the entire institution. The “% ineffective” term is the probability, based on assessments, that the control will fail to function as intended, so a reliable control shrinks the multiplier and, with it, the residual risk. When assessing effectiveness, make sure controls are regularly monitored for trends to help understand whether they are performing as expected.
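Putting the last two formulas together makes the direction of the relationship explicit. This is a sketch under my reading of the terms, with control impact as the 0-to-1 share of the inherent risk the control addresses and “% ineffective” as the chance it fails:

```python
def control_effectiveness(control_impact: float, pct_ineffective: float) -> float:
    """Multiplier fed into the residual-risk formula.
    control_impact: 0.0-1.0 share of inherent risk the control addresses.
    pct_ineffective: 0.0-1.0 probability the control fails to work."""
    return control_impact * pct_ineffective

# A broad firewall (covers 90% of the risk) that fails about 10% of the
# time yields a small multiplier, and therefore low residual risk:
multiplier = control_effectiveness(0.9, 0.10)
inherent = 9
print(round(inherent * multiplier, 2))  # 0.81
```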
If risk assessments seem like they require a lot of specific thought and analysis, that’s because they do. The goal is to remove as much subjectivity from the process as possible and create quantifiable metrics.
If you’re not carefully considering both impact and likelihood and demonstrating exactly how those factors influenced your assessment, examiners are going to question your methods.
You need clear logic behind your decisions and you need to show your work.