The data: AI is simultaneously an enormous contributor to sophisticated fraud and financial institutions’ most powerful defense against it, according to recent research surveying financial institutions.
The study notes that AI-enabled fraud, including deepfakes and voice cloning, is on the rise, and 71% of respondents identified financial criminals and fraud rings as the main culprits. Meanwhile, 93% of respondents believed machine learning and genAI will “revolutionize” fraud detection.
Digging into the data:
- Fraud is most prevalent in digital channels: 80% of respondents said attacks occurred through online or mobile banking.
- The most commonly cited red flag from an attempted fraud event was inconsistent user behavior or device characteristics (28%).
- The most frequently reported fraud types overall were credit card fraud (20%), account takeover fraud (18%), identity theft (11%), and check fraud (11%).
- Among banks that reported catching fraud, over half (56%) most often detect it at the time of transaction.
Zoom out: Most banks’ risk-detection and decisioning tools were developed before AI products were widely available or as sophisticated as they are today. But genAI tools have made scams easier and cheaper to scale. Consumers are at risk: A meaningful share of respondents had experienced AI-driven bank impersonations (28%), voice cloning calls (21%), and synthetic identity fraud (18%).
Our take: A sophisticated fraud strategy and a technology roadmap are fundamental to banks fighting AI-enabled and fraud ring–driven financial crime.
Banks appear to be on board: Planned investments over the next 12 months include identity risk solutions (64%), document verification software (49%), anti-scam education tools (38%), and biometric authentication (38%). As the volume and pace of fraud grow, banks should commit dedicated effort and investment to keep up.