As fraud explodes across digital channels, identity verification has become a cornerstone of the government’s anti-fraud strategy. Last month, the White House’s Executive Order 14390 put digital identity at the center of federal cybersecurity efforts. Industry players like Socure are now calling for digital identity to be treated as “foundational national infrastructure.” But as the government races to strengthen its identity systems, a critical question is being overlooked: not all identity verification is built the same way, and some of the most widely deployed systems may be the most vulnerable.
There are two dominant approaches to verifying who you are online. Biometric identity verification confirms your identity through something unique to your body, like a facial scan, fingerprint, or iris pattern. AI-powered identity verification, by contrast, scores your identity against a vast data graph, pulling signals from your phone, email, device, location history, credit behavior, and behavioral patterns, without ever asking you to take a selfie.
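To make the contrast concrete, here is a minimal sketch of what a data-graph verifier does under the hood. Everything in it is hypothetical: the signal names, weights, and approval threshold are illustrative stand-ins, not any vendor’s actual model.

```python
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    """Signals a data-graph verifier might pull for one applicant.
    All fields and weights here are hypothetical, for illustration only."""
    email_age_days: int           # how long the email address has existed
    phone_carrier_match: bool     # phone number registered to the claimed name
    device_seen_before: bool      # device fingerprint known to the network
    address_matches_credit: bool  # address consistent with credit bureau data
    typing_cadence_typical: bool  # behavioral patterns look human

def risk_score(s: IdentitySignals) -> float:
    """Combine signals into a 0.0 (safe) to 1.0 (risky) score.
    Real systems use learned models over far more features; this
    weighted checklist just shows the shape of the approach."""
    score = 0.0
    if s.email_age_days < 90:
        score += 0.30  # a freshly created email is a classic fraud signal
    if not s.phone_carrier_match:
        score += 0.25
    if not s.device_seen_before:
        score += 0.15
    if not s.address_matches_credit:
        score += 0.20
    if not s.typing_cadence_typical:
        score += 0.10
    return score

def verify(s: IdentitySignals, threshold: float = 0.5) -> bool:
    """Approve the identity if aggregate risk falls below the threshold."""
    return risk_score(s) < threshold
```

Notice what never happens in this pipeline: no step checks the person’s body. The system checks whether the data surrounding a person looks coherent, and coherence is the only thing it can reward.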
Companies like Socure, LexisNexis Risk Solutions, and TransUnion have built their platforms on this data-graph model. It is fast and frictionless, and it processes billions of verifications annually. It is also built on the same commercial data pipelines that regulators have spent years trying to rein in.
And in an age of AI-enabled fraud, that creates a compounding problem. The more sophisticated AI becomes as an attack tool, the easier it is to synthesize the exact signals these systems are trained to trust: email addresses, phone numbers, device fingerprints, behavioral patterns. All of it can be faked, scraped, or purchased on the dark web.
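Extending the hypothetical scorer sketched above makes the weakness easy to state: a synthetic identity that presents coherent signals is indistinguishable from a real one, because the system evaluates the signals, not the person. The values below are invented, but each corresponds to a category of data the article notes can be faked, scraped, or bought.

```python
# A synthetic identity assembled from aged, spoofed, or purchased data.
# Hypothetical values; the point is that every signal can be supplied.
synthetic = IdentitySignals(
    email_age_days=400,           # "aged" email accounts are sold in bulk
    phone_carrier_match=True,     # number registered under the fake name
    device_seen_before=True,      # device fingerprints can be replayed
    address_matches_credit=True,  # seeded via a thin synthetic credit file
    typing_cadence_typical=True,  # automated input can mimic human cadence
)

print(risk_score(synthetic))  # 0.0 -- every check passes
print(verify(synthetic))      # True: approved, no human ever inspected
```

Nothing here exploits a bug. The system behaves exactly as designed; the design is the exposure.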
For consumers, the stakes are real. These systems are only as secure as the weakest link in the data ecosystem they draw from; a breach anywhere in that supply chain poisons the signals everywhere else. AI-first identity verification does not make you safer. It just makes more of your data available to the people trying to exploit it.