In an increasingly competitive environment riddled with mounting risks, financial institutions need to move past their legacy technology systems to maximize revenue and earnings.
Jean-Philippe Desbiolles, global vice-president for data, cognitive and AI for Financial Services at IBM, said in a blog post, “The continued reliance on legacy systems is becoming a real issue, especially with regard to scaling AI and data quickly.”
As institutions look to upgrade their technology systems, AI will deliver the fastest and largest ROI, according to Desbiolles. AI and machine learning (ML) help with decision support, fraud detection, personalization, and a host of other financial operations.
But not all AI/ML systems are alike. Here are five factors to consider when evaluating them:
1. Human-centric functionality
AI and ML systems are modeled on human behavior. Is the system you’re evaluating designed to “think” like a human?
For example, when scanning for keywords (across resumes, say), does it look for the supporting language surrounding those terms that gives the system context? Context is critical to distinguishing ‘java’ on a barista’s resume from ‘java’ on a software developer’s resume.
In other words, can it scan a resume like an expert recruiter? Does the AI understand and replicate, for example, how a recruiter thinks? What a recruiter looks for? How candidates and recruiters interact? How recruiters and hiring managers interact?
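To make that concrete, here is a minimal sketch of context-sensitive keyword matching. The cue-word lists and scoring are illustrative assumptions, not any vendor’s actual method; a production system would learn context from large corpora rather than hard-code it.

```python
# Minimal sketch of context-aware keyword disambiguation (illustrative only).
import re

# Hypothetical cue words for each sense of an ambiguous keyword.
CONTEXT_CUES = {
    "java": {
        "software": {"spring", "jvm", "maven", "backend", "developer"},
        "coffee": {"espresso", "barista", "brew", "latte", "roast"},
    },
}

def disambiguate(keyword: str, resume_text: str) -> str:
    """Guess the sense of `keyword` from terms that co-occur in the resume."""
    words = set(re.findall(r"[a-z]+", resume_text.lower()))
    senses = CONTEXT_CUES.get(keyword.lower(), {})
    scores = {sense: len(words & cues) for sense, cues in senses.items()}
    best = max(scores, key=scores.get, default="unknown")
    return best if scores.get(best, 0) > 0 else "unknown"

print(disambiguate("java", "Barista skilled in espresso and latte art"))
# -> coffee
print(disambiguate("java", "Java developer: Spring, Maven, backend services"))
# -> software
```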
2. Built by experts
There are a few different considerations here.
Was the system designed to be used by your management and staff or by technology experts? In other words, is the user interface simple to use and simple to learn?
Similarly, does it fit into your current workflow, or will you need to adapt to the new system? Has the taxonomy been built by expert linguists who understand natural language usage? Have the tools been built from the ground up for your industry? Has the system been customized to meet your needs?
AI and ML systems for financial services differ from those designed for retail. Banks must ensure they are working with systems built specifically for financial services and tailored to their needs.
3. Transparency by design
Machine learning can suffer from a “black box” problem. As models become increasingly complex, it becomes increasingly difficult to explain why a given outcome occurred. The importance of each data point in the decision-making process isn’t easily explained.
One remedy is to favor linear models over black-box deep learning. Linear models require smaller datasets and less training time, and they have the added benefit of being completely transparent.
Knowing what each data point is and how it’s weighted matters: users trust what they can see and can agree or disagree with. And in the minority of cases where the system makes a mistake, transparency makes the cause obvious.
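As a sketch of what that transparency looks like in practice, here is a small linear model whose per-feature weights can be read off directly. The fraud features and training data are invented for illustration:

```python
# Sketch of a transparent linear model: every prediction decomposes into
# named, signed feature weights. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["txn_amount_zscore", "new_device", "foreign_ip", "night_hour"]

# Toy transaction data; y marks fraud (1) vs. legitimate (0).
X = np.array([
    [2.5, 1, 1, 1],
    [0.1, 0, 0, 0],
    [1.8, 1, 0, 1],
    [0.3, 0, 1, 0],
    [2.9, 1, 1, 0],
    [0.2, 0, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Because the model is linear, what each data point is and how it's
# weighted can be inspected directly, feature by feature.
for name, weight in zip(features, model.coef_[0]):
    print(f"{name:>20}: {weight:+.3f}")
```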
4. Users have control (not the system)
It is also important to choose systems built for humans, which means the system needs to listen.
Outstanding tools put the user in control, not the algorithms. They let users override the algorithms by placing greater weight on users’ own decisions and activity. The system then learns from that feedback, making each future result more accurate than the last.
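A minimal sketch of such a feedback loop appears below, assuming a simple linear classifier and an invented review function; the API shape is an assumption for illustration, not any vendor’s actual interface:

```python
# Sketch of a user-in-control feedback loop: the model suggests, the human
# decides, and the model updates from the human's decision.
import numpy as np
from sklearn.exceptions import NotFittedError
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()           # simple linear model, updatable online
CLASSES = np.array([0, 1])        # 0 = reject, 1 = approve

def review(case_features, human_decision):
    """The model proposes; the human's decision is final and becomes training data."""
    x = np.array([case_features])
    try:
        suggestion = int(model.predict(x)[0])
    except NotFittedError:        # no suggestion before the first feedback
        suggestion = None
    # The user's decision always wins; the system learns from it.
    model.partial_fit(x, [human_decision], classes=CLASSES)
    return suggestion, human_decision

print(review([0.9, 1.0], human_decision=1))  # (None, 1): model had no say yet
print(review([0.8, 0.9], human_decision=1))  # model now echoes the feedback
```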
5. Mitigating bias
Unconscious bias enters in two forms: biased data and biased parameter tuning or feature engineering. The system should therefore draw on a diversity of data sources to help ensure that no single data-gathering bias skews the learning.
Financial institutions should also avoid topic modeling, or any other algorithm that averages document contents to make a decision. Additionally, banks and credit unions should use linear models, which allow each feature to be audited directly from the results.
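Below is a brief sketch of both audits, with hypothetical source names and thresholds:

```python
# Sketch of two bias audits this section suggests: (1) check that no single
# data source dominates training, and (2) review each linear feature's
# learned weight directly. Names and thresholds are illustrative assumptions.
from collections import Counter

def audit_source_mix(records, max_share=0.5):
    """Return any data source contributing more than `max_share` of rows."""
    counts = Counter(r["source"] for r in records)
    total = sum(counts.values())
    return {src: n / total for src, n in counts.items() if n / total > max_share}

def audit_feature_weights(names, weights, limit=2.0):
    """Surface features whose weight is large enough to dominate decisions."""
    return [(n, w) for n, w in zip(names, weights) if abs(w) > limit]

records = [{"source": "bureau_a"}] * 3 + [{"source": "bureau_b"}]
print(audit_source_mix(records))            # {'bureau_a': 0.75} -> skewed mix
print(audit_feature_weights(
    ["income", "zip_code_proxy"], [0.4, 3.1]))  # [('zip_code_proxy', 3.1)]
```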