Combating payments fraud isn’t likely to get easier this year, and some professionals acknowledge it could get tougher as artificial intelligence spawns new threats.
U.S. consumers and businesses remain prime targets for fraudsters who fleece victims through impostor scams, check washing operations and synthetic identity schemes, among other ploys. The growing acceptance of digital assets in the marketplace, as the GENIUS Act brings stablecoins into payments, may give criminals yet another avenue of attack.
Criminal use of artificial intelligence, with its deepfake potential to sharpen fraudsters’ tricks, is expected to ramp up in 2026, even as the payments industry embraces AI for agentic commerce. As companies jockey for an AI edge, they’ll have to balance the technology’s promise against its perils during the transition.
“Where there is confusion, there is opportunity,” industry consultant Peter Tapling said of AI’s arrival. “It’s moving fast, and it’s going to take time,” he said in an interview Monday, explaining that the AI era has set in quickly for the industry but is still taking shape.
Classic fraud types, such as check tampering, remain problematic despite President Donald Trump’s executive order last year directing the federal government to stop sending paper checks. About two-thirds of businesses encountered check fraud in 2024, making checks the most frequently cited fraud method that year, according to the latest survey on the subject from the Association for Financial Professionals.
Nonetheless, this year the industry will likely focus on newer, emerging threats. The payments industry has arguably become obsessed with AI-powered agentic commerce and the possibilities it raises for agents that will not only shop, but also buy goods and services on behalf of consumers and businesses. Card networks, processors and fintechs are racing to capture the potential boost in payments volume.
At the same time, AI presents dangers for the industry. Criminals have already made big strides in industrializing and scaling their fraud schemes. Now, AI may let them aim even more believable come-ons at victims to dupe them into sending money under false pretenses.
How AI is benefiting fraudsters
Bad actors are personalizing their payment cons and mimicking trusted contacts more convincingly, said Colin Parsons, head of fraud product strategy at the financial software firm Nasdaq Verafin. Even savvy individuals can be fooled, he said.
This growing sophistication shows up in fraudsters’ use of synthetic or stolen identities to open accounts, in fake bank websites built to elicit customer information, and in impersonations of grandchildren meant to dupe the elderly, he said.
“Those situations just continue to become more sophisticated, and harder for an individual to understand that they’re not actually interacting with the person that they think they are through the chat, or through a phone call, through email,” Parsons said in an interview this month.
The impact of AI on payments is complicated by other trends unfolding in tandem, said Forrester analyst Lily Varon, pointing to agentic commerce and the evolution of password practices. Those trends sometimes challenge existing rules for payment authentication, authorization and fraud liability, among other core principles of the payments processing system, Varon said.
“We've got a few different streams of things happening at once,” Varon said in an interview last month. “AI is improving the quality of deepfakes; [and] the growing adoption of AI tools is facilitating the creation of them,” she said. “And even though the main vector for fraud, with deepfakes, will be social engineering, it can and will impact payment authentication as well.”
Password practices evolve
That’s happening as passwords and passcodes are increasingly seen as ineffective, Varon noted. So, as the industry tries to move toward a future without passwords, long treated as the standard way to authenticate users, it’s grappling with how best to verify identity.
Agentic commerce exacerbates that issue: bots that fraud systems once treated as automatically suspect are now empowered to shop and complete transactions on consumers’ behalf.
“There's a lot of excitement about agentic AI and how it might apply to payments, but there's also a lot of concern about how it might be taken advantage of, just as deepfakes have taken advantage of generative AI's ability to process voice and image and text,” Tapling said.
Biometrics may bolster defenses
Varon predicted biometric authentication will help bridge the gap as more service providers and consumers embrace FIDO passkeys. Built on Fast Identity Online standards, passkeys replace passwords with cryptographic key pairs bound to particular devices and unlocked with fingerprints, face scans or other such physical attributes.
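In concept, a passkey login is a challenge-response exchange: the server sends a random challenge, the device signs it with a private key that never leaves the hardware, and the server checks the signature against the public key stored at enrollment. The Python sketch below illustrates only that cryptographic core, not the full FIDO/WebAuthn protocol, and the variable names are purely illustrative.

```python
# Sketch of the challenge-response idea behind FIDO passkeys.
# This shows only the cryptographic core, not the WebAuthn protocol.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the device generates a key pair and sends only the
# public key to the server. A fingerprint or face scan later gates
# access to the private key, which never leaves the device.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Login: the server issues a one-time random challenge ...
challenge = os.urandom(32)

# ... the unlocked device signs it ...
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# ... and the server verifies the signature with the stored public key.
try:
    server_stored_public_key.verify(
        signature, challenge, ec.ECDSA(hashes.SHA256())
    )
    print("authenticated")
except InvalidSignature:
    print("rejected")
```

Because no shared secret ever crosses the network, passkeys resist the phishing and credential-stuffing attacks that plague passwords.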
AI is also bolstering the defense by helping institutions spot fraud. “Artificial intelligence is really allowing technology providers and institutions to leverage this new technology and use it in a way that's very positive for identifying fraud,” Parsons said.
Tapling noted the same, saying AI is used against fraud both on the front lines, interacting with consumers, and in the back office. He pointed to the New York company Reality Defender, which uses AI to detect deepfakes. “We're now using AI to identify AI,” he said. “They've built an AI model that senses, ‘is the video that I'm looking at a deepfake?’”
Companies are also employing AI, with help from financial crime software companies like Unit 21, to improve tracking of fraudulent transactions, Tapling said. “There's all kinds of ways that we can use AI, both at the time of a transaction event, to try and give me more signals or enrich the signals that I have, but also at the back end,” he said.
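In practice, that back-end signal enrichment often amounts to scoring each transaction’s features against historical behavior and flagging outliers for review. The sketch below illustrates the idea with scikit-learn’s IsolationForest; the features, sample data and flag threshold are invented for illustration and do not represent any vendor’s actual model.

```python
# Toy transaction anomaly scorer illustrating back-end signal
# enrichment. Features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transactions: [amount_usd, hour_of_day, new_payee_flag]
history = np.array([
    [42.10, 13, 0],
    [18.75, 9, 0],
    [63.00, 18, 0],
    [25.40, 11, 1],
    [51.20, 15, 0],
    [33.80, 10, 0],
])

model = IsolationForest(contamination=0.1, random_state=0).fit(history)

# Score an incoming payment at transaction time: a large transfer to
# a brand-new payee at 3 a.m. should stand out against the history.
incoming = np.array([[4800.00, 3, 1]])
score = model.decision_function(incoming)[0]  # negative = anomalous
print(f"anomaly score: {score:.3f}",
      "-> flag for review" if score < 0 else "-> pass")
```

A real system would feed far richer signals, such as device fingerprints, payee history and network intelligence, into such models, but the shape of the decision is the same: enrich the transaction with features, score it and route outliers to investigators.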
Fighting push-payment scams
One of the most pernicious frauds to gain traction in recent years is push-payment fraud, in which criminals hoodwink consumers and businesses into sending money to accounts the crooks control. Because victims authorize the transfers themselves, the fraud has been particularly difficult to fight.
It has attracted the attention of lawmakers as stories of victims facing major losses in such scams have proliferated.
Organizations have sought to educate the public about such snares, and companies have upgraded systems and tools to confront the swindles. Concerns have climbed as speedier payment systems have become available, since money moved instantly is harder to recall once a scam is discovered.
Since the Federal Reserve launched its instant payments system, FedNow, in 2023, users have called for stronger fraud-fighting capabilities. The Fed said this month that it’s working on anti-fraud upgrades to the system.
The central bank has added a feature that gives FedNow users “the ability to verify a beneficiary name associated with the account of an intended payment,” according to a Jan. 15 press release.
The central bank also has a FedNow pilot program underway that will let banks and credit unions that use the system “pre-check” a receiver account before sending a payment over the real-time system, according to the release.
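A beneficiary name check like the one the Fed describes boils down to comparing the payee name the sender supplies against the name registered on the destination account before any money moves. The sketch below illustrates only that matching step; the account directory, function name and 0.8 similarity threshold are hypothetical, not part of the FedNow interface.

```python
# Toy beneficiary name verification, illustrating the pre-payment
# check described above. The directory and threshold are invented;
# this is not the FedNow interface.
from difflib import SequenceMatcher

# Hypothetical directory mapping account numbers to registered names.
ACCOUNT_NAMES = {"123456789": "Acme Plumbing LLC"}

def verify_beneficiary(account: str, claimed_name: str) -> str:
    """Compare the sender-supplied name to the name on the account."""
    registered = ACCOUNT_NAMES.get(account)
    if registered is None:
        return "no match: unknown account"
    similarity = SequenceMatcher(
        None, claimed_name.lower(), registered.lower()
    ).ratio()
    if similarity >= 0.8:
        return f"match ({similarity:.2f}): proceed with payment"
    return f"mismatch ({similarity:.2f}): warn sender before sending"

print(verify_beneficiary("123456789", "Acme Plumbing"))    # near match
print(verify_beneficiary("123456789", "Joes Crab Shack"))  # mismatch
```

Surfacing a mismatch before the payment clears matters most on instant rails, where a completed transfer is difficult to reverse.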
Sharing information to fight fraud
Apprehension among financial institutions about the exchange of information – whether due to privacy laws, competitive instincts or other considerations – has long hampered communication and cooperation that many believe could better thwart payments fraud.
Fed Vice Chair for Supervision Michelle Bowman pointed to the barriers standing in the way of a stronger collective financial institution front in the battle against fraud.
“When banks share the latest information about fraud prevention among themselves – if some of the data is currently classified as [confidential supervisory information] – the disclosure can be prohibited, even if sharing it could make all banks more resilient to emerging fraud risks,” she said Jan. 7 in California.
She suggested in that speech, which was focused on modernizing regulations, that the federal government needs to change its approach to better address fraud. The Fed is reviewing how to better define instances where such information might be shared.
Verafin also aims to help financial institutions share intelligence to fight fraud.
“The primary way that we see for financial institutions to prevent this type of fraud [via individuals] is by leveraging the intelligence across networks, understanding what's happening, not only within their institution, but outside of their institution, and working together to prevent financial crime,” Parsons said.