Does Your Healthcare Information Management System Mitigate Medical Record Duplication?

It has long been conventional wisdom in the healthcare community that roughly 10 percent of patient records are duplicates. ARGO’s ongoing research indicates, however, that the problem is even more severe. Our analysis of nearly 70 hospitals finds a median duplicate rate of 10% (half of the hospitals fall below that mark, half above), but an average of a whopping 19%. In other words, organizations with healthcare information management issues tend to have big issues.
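
To see what that gap between the median and the average implies, consider a minimal sketch with made-up rates for nine hypothetical organizations; the values below are illustrative, not ARGO data, but they show how a few severe cases pull the average far above the median.

```python
# Illustrative sketch only: hypothetical duplicate rates for nine organizations
# (made-up numbers), showing how a median of 10% can coexist with an average of 19%.
from statistics import mean, median

rates = [0.05, 0.07, 0.08, 0.09, 0.10, 0.12, 0.30, 0.40, 0.50]

print(f"median: {median(rates):.0%}")   # 10% - half the organizations are below this
print(f"mean:   {mean(rates):.0%}")     # 19% - pulled up by the few severe cases
```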

In a recent post, I discussed the various ways such duplicate records are introduced into systems and proven strategies for righting the ship. Beyond the obvious benefits of improved customer service and reduced risk of clinical impact, let’s dig into the costs duplicate records impose on healthcare entities.

More Entry Points, More Errors

It’s likely that the wave of industry consolidation has contributed to the upward spiral in dupe rates. As healthcare databases are combined, the rudimentary matching algorithms built into most healthcare information management systems typically catch only 20 percent of redundant records. Even intermediate rules-based solutions, when applied to database merges, identify only half of the duplicates. The increasing use of electronic data exchange between care organizations has likely introduced more errors as well, given the even lower consistency of data fields and data capture processes across organizations.
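
Here is a minimal sketch of why that happens: when names and dates of birth are captured inconsistently, an exact-key match sees distinct patients where a normalized, fuzzy comparison flags a likely duplicate. The sample records, field names, and similarity threshold below are assumptions for illustration, not ARGO’s matching logic.

```python
# Illustrative sketch only (not ARGO's algorithm): exact-key matching vs. a
# normalized fuzzy comparison, showing why rudimentary matching misses
# duplicates when data capture is inconsistent.
from difflib import SequenceMatcher

records = [
    {"last": "O'Brien", "first": "Katherine", "dob": "1984-03-07"},
    {"last": "OBrien",  "first": "Catherine", "dob": "03/07/1984"},  # same patient, captured differently
    {"last": "Nguyen",  "first": "An",        "dob": "1990-11-21"},
]

def exact_key(rec):
    # Rudimentary approach: an exact match on raw field values.
    return (rec["last"], rec["first"], rec["dob"])

def normalize(rec):
    # Strip punctuation from names, lowercase, and coerce DOB to YYYY-MM-DD.
    last = "".join(c for c in rec["last"].lower() if c.isalpha())
    first = "".join(c for c in rec["first"].lower() if c.isalpha())
    dob = rec["dob"].replace("/", "-")
    if len(dob) == 10 and dob[2] == "-":  # MM-DD-YYYY -> YYYY-MM-DD
        dob = f"{dob[6:]}-{dob[:2]}-{dob[3:5]}"
    return last, first, dob

def likely_duplicate(a, b, threshold=0.80):
    # Flag a likely duplicate pair if DOBs agree after normalization
    # and the full names are sufficiently similar.
    la, fa, da = normalize(a)
    lb, fb, db = normalize(b)
    name_similarity = SequenceMatcher(None, f"{la} {fa}", f"{lb} {fb}").ratio()
    return da == db and name_similarity >= threshold

# Exact matching sees three distinct patients; the fuzzy check flags the overlap.
print(len({exact_key(r) for r in records}))          # 3
print(likely_duplicate(records[0], records[1]))      # True
print(likely_duplicate(records[0], records[2]))      # False
```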

A lack of accurate records is an obvious factor in denied or delayed reimbursement claims. A 2016 Ponemon Institute study documented an average of $17.4 million in potential lost revenue for the top-tier hospitals it researched. Others have found significant reductions in receivable days outstanding and improved collection rates resulting from the removal of duplicate records. This is entirely logical when one considers the elimination of patient and insurance-company confusion or objections.

Separate studies have pinned the administrative cost of resolving a single duplicate at as much as $100 per record, reflecting the time and effort to research and correct these entries in real time. Another critical factor, albeit one that has not been formally quantified, is the burden of lacking accurate and necessary patient health information at the “moment of truth,” which in itself triggers real-time fire drills.
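
For a rough sense of how that per-record cost compounds, the back-of-the-envelope sketch below multiplies an assumed annual registration volume and duplicate rate by the $100 resolution cost cited above; the volume and rate are hypothetical inputs, not figures from the studies.

```python
# Back-of-the-envelope estimate only; the registration volume is a hypothetical
# input, the duplicate rate is the average from ARGO's analysis above, and
# $100/record is the upper-bound administrative cost cited above.
annual_registrations = 250_000          # hypothetical hospital registration volume
duplicate_rate = 0.19                   # average duplicate rate from ARGO's analysis
cost_per_duplicate = 100                # administrative cost to resolve one record

duplicates = annual_registrations * duplicate_rate
print(f"Estimated duplicates per year: {duplicates:,.0f}")
print(f"Estimated resolution cost:     ${duplicates * cost_per_duplicate:,.0f}")
```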

My colleague Shwetal Covert and I go into greater detail on the topic in this article for Health Data Management. (Note: readers must complete a free registration to access the full article.)

The Bar Is Getting Higher: Much Higher

If the internal benefits of reducing duplicate records aren’t enough to spur an organization into action, the Office of the National Coordinator for Health Information Technology (ONC) may provide additional incentive. The ONC has published goals for duplicate record error rates of 2% by 2017, 0.5% by 2020, and 0.01% by 2024. Given that the vast majority of organizations remain short of the 2017 goal, the far more stringent 2020 standard appears to be a daunting challenge.

Although there is no mandate or enforcement mechanism attached to these goals at present, it’s not hard to envision the government taking a more aggressive approach absent demonstrable industry progress. ARGO’s advanced search/match algorithms, leveraging machine learning and artificial intelligence, have been shown to identify 99.5% of potentially duplicate records.

Some of these records will prove not to be dupes, but ARGO’s software equips the hospital or healthcare organization with a clear path to better healthcare information management and a roadmap for meeting the ONC’s 2020 threshold. And given the ongoing advancements in machine learning and ARGO’s dedication to the science of continuous improvement, that 2024 target may prove more attainable than it appears on the surface.