The ongoing parade of data breach announcements, combined with consumers' seemingly irresistible urge to share their secrets on social media, means that the growing scourge of synthetic identity fraud will likely get worse before it gets better. With ever more access to both personally identifiable information and random factoids, scammers can create personas for "people" who do not exist, or worse yet, for actual humans who have no idea accounts are being opened in their names.
I covered the basics of synthetic identity fraud in an earlier blog post. Fraudsters often concoct personas to resemble "thin file" customers. If banks become too aggressive in denying every marginal account or loan application, they could in the process disenfranchise the groups of people most in need of financial services, groups that are also a key engine for future growth of both the bank's P&L and the economy overall.
Fortunately, the same computing advancements that enable this sort of fraud can be deployed to counteract it as well.
Is It True What They Say?
A common tactic in synthetic identity fraud is to establish an account based on stolen credentials, then to maintain it "properly" for a period of time, making required payments and even qualifying for expanded credit lines. There's a reason it's called a "confidence game": the fraudsters' goal is to establish as much legitimacy as possible, then execute a "bust out" scam at a point that maximizes their ill-gotten gains.
Even the best fraudster, however, inevitably leaves behind some clues. Too often these are discovered only in hindsight, leaving the bank obsessing over why it didn't notice the warning signs earlier. Ongoing, granular monitoring of accounts is the best way to flag these subtle slips before damage can be inflicted.
For instance, account activity should be consistent with the persona presented in the application. Perhaps spending and/or payment patterns don't follow logical geographic or category footprints. Social media activity may also expose telltale signs, as could subsequent legitimate uses of a Social Security number or driver's license that don't jibe with the context in which it was provided earlier, potentially indicating applicant data was lifted from an unwitting minor.
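To make the idea of consistency checks concrete, here is a minimal sketch of how such rules might look in code. The field names, schema, and thresholds are purely illustrative assumptions, not any bank's actual system:

```python
# Hypothetical sketch: rule-based consistency checks comparing an account's
# activity to the persona presented on its application. All field names and
# thresholds below are illustrative assumptions.

def flag_inconsistencies(application, transactions):
    """Return a list of warning flags for a single account."""
    flags = []

    # Geographic footprint: most spending should occur near the stated home state.
    home_state = application["state"]
    out_of_state = [t for t in transactions if t["state"] != home_state]
    if transactions and len(out_of_state) / len(transactions) > 0.8:
        flags.append("geography: most spend outside stated home state")

    # Category footprint: a persona claiming salaried employment should
    # normally show payroll-like deposits over a full statement cycle.
    if application.get("employment") == "salaried":
        payroll = [t for t in transactions if t["category"] == "payroll"]
        if not payroll:
            flags.append("income: salaried applicant with no payroll deposits")

    return flags


# Example: an account whose activity contradicts its application on both counts.
application = {"state": "OH", "employment": "salaried"}
transactions = [
    {"state": "NV", "category": "retail"},
    {"state": "NV", "category": "retail"},
    {"state": "FL", "category": "retail"},
]
flags = flag_inconsistencies(application, transactions)
```

In practice such rules would be far more numerous and data-driven, but the pattern is the same: each check compares observed behavior against the story the applicant told.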
With Big Data Comes Big Responsibility
Banks must sift through massive amounts of data, originating across countless entry points both internal and external, on a continuous basis in order to make such determinations. Machine learning routines are available to shoulder the heavy lifting of this complex task, discerning patterns (both positive and negative) that would escape the naked eye, and self-adjusting to account for new patterns that develop over time.
Importantly, the idea is not to empower these algorithms to make fraud determinations without human oversight; that would never pass muster with regulators, and would create a poor customer experience. Instead, machine learning should be deployed to unearth potential gems hidden within reams of information and pass them along to trained fraud experts for an actual determination.
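That human-in-the-loop pattern can be sketched in a few lines. In this illustration the scoring function is a stand-in for a trained model, and the signal names and threshold are invented for the example; the point is that high scores route an account to a review queue for analysts rather than triggering an automatic decline:

```python
# Hypothetical sketch of the human-in-the-loop pattern: a model scores each
# account for fraud risk, and high-scoring accounts are queued for human
# review instead of being auto-declined. The scoring function is a stand-in
# for a real trained model; all signals and weights are illustrative.

REVIEW_THRESHOLD = 0.7  # illustrative cutoff; tuned against analyst capacity

def risk_score(account):
    """Stand-in for a trained model's probability-of-fraud output."""
    score = 0.0
    if account["address_reuse_count"] > 3:   # same address on many applications
        score += 0.4
    if account["ssn_context_mismatch"]:      # SSN history doesn't fit stated age
        score += 0.4
    if account["rapid_credit_requests"]:     # aggressive credit-line expansion
        score += 0.2
    return min(score, 1.0)

def triage(accounts):
    """Route high-scoring accounts to the analyst review queue, never auto-decline."""
    return [a["id"] for a in accounts if risk_score(a) >= REVIEW_THRESHOLD]


# Example: one account trips multiple signals, the other looks benign.
accounts = [
    {"id": "A1", "address_reuse_count": 5, "ssn_context_mismatch": True,
     "rapid_credit_requests": False},
    {"id": "A2", "address_reuse_count": 1, "ssn_context_mismatch": False,
     "rapid_credit_requests": True},
]
review_queue = triage(accounts)
```

The design choice worth noting is that the model's output feeds a queue, not a decision: the final call on every flagged account remains with a trained fraud expert.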
Synthetic ID fraud is clearly on the rise, not only with purely fictional personas but also through leveraging the credentials of particularly vulnerable populations such as youth and immigrants, the groups least likely to notice such malfeasance until the damage to their financial histories has already been inflicted. Through machine learning, banks have a wonderful opportunity to "do well by doing good," improving their own P&Ls while protecting customers and keeping credit available to more consumers.
For more information, download our white paper, "Stop Fraud With Analytics."