In the American mindset, more is typically considered better. That concept extends to the growing field of big data: as computers efficiently process ever more information, feeding a vast array of data points into an analytics engine can surface a host of unexpected correlations. It's then up to humans to determine which of those correlations reflect causation and which represent meaningful insights with predictive value.
The same logic applies to healthcare. Most healthcare organizations aim to serve as many patients as possible, and to gather as much information on those patients as possible in order to optimize the quality of care they provide. Such data allows them to flag known conditions and interactions, and it creates the opportunity to anticipate looming issues.
The pursuit of more health data is worthwhile if it is good data. Unfortunately, it’s a documented fact that duplicate records are a persistent problem across most organizations. Multiple studies have shown that roughly 10 percent of all patient records are duplicates. This situation is likely getting worse given the ongoing wave of industry consolidation and the growth of urgent care centers as an additional point of entry.
Perhaps that’s why ARGO’s recent research finds duplicate rates at many organizations in the 16-19 percent range. The root cause is frequently a well-considered, patient-centric decision to move through the administrative process as quickly as possible and proceed with needed care. That often means creating a new record rather than stopping to research discrepancies with existing ones.
Things get even trickier when the time comes for back-end cleanup. The task of scrubbing databases to locate and resolve duplicate records rarely tops anyone’s to-do list, and it consumes bandwidth that is already in high demand for other priorities.
Quality over Quantity
The downside of allowing duplicate records to linger in healthcare systems should be apparent. They hamper efficiency, result in lost billing opportunities and, most importantly, can compromise the quality of care. The additional volume they contribute to big data analysis is a detriment rather than a benefit: duplicates distort the true frequency of certain conditions and events, which in turn undermines the accuracy of any correlations drawn from the data.
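As a toy illustration with made-up numbers, consider how duplicates that cluster among frequent visitors can inflate an apparent prevalence figure:

```python
# Toy illustration (hypothetical numbers): duplicates tend to cluster around
# patients who visit most often, so they can inflate an apparent rate.
total_records = 1_000
flagged_records = 150        # records showing a particular condition
duplicate_flagged = 15       # assumed duplicates hiding among the flagged records

apparent_rate = flagged_records / total_records
deduped_rate = (flagged_records - duplicate_flagged) / (total_records - duplicate_flagged)

print(f"Apparent prevalence:     {apparent_rate:.1%}")  # 15.0%
print(f"Deduplicated prevalence: {deduped_rate:.1%}")   # ~13.7%
```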
The good news is that software-based solutions exist to help detect these duplicates and to prevent new ones from entering the system going forward. Hopefully it’s clear that in this case more records are not better, and that care organizations should eliminate any disincentives to correcting inaccuracies.
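As a rough sketch of the underlying idea (not a description of any specific vendor’s matching engine), a rule-based check might pair an exact date-of-birth match with a fuzzy name comparison. The field names and similarity threshold below are illustrative assumptions:

```python
# Minimal sketch of rule-based duplicate detection on patient demographics.
# Field names and the 0.85 threshold are assumptions for illustration only.
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Return a similarity ratio in [0, 1] for two normalized name strings."""
    return SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def likely_duplicate(rec_a: dict, rec_b: dict, threshold: float = 0.85) -> bool:
    """Flag two records as probable duplicates when the date of birth matches
    exactly and the full names are highly similar."""
    if rec_a["dob"] != rec_b["dob"]:
        return False
    full_a = f"{rec_a['first_name']} {rec_a['last_name']}"
    full_b = f"{rec_b['first_name']} {rec_b['last_name']}"
    return name_similarity(full_a, full_b) >= threshold

# Example: a registration-desk typo produces a near-miss on the last name.
existing = {"first_name": "Katherine", "last_name": "O'Neill", "dob": "1984-03-12"}
incoming = {"first_name": "Katherine", "last_name": "ONeil", "dob": "1984-03-12"}
print(likely_duplicate(existing, incoming))  # True -> route to review, not a new record
```

Production matching engines typically weigh many more attributes (address, phone, identifiers) and use probabilistic scoring, but even a simple check like this can route a near-miss to human review instead of spawning a new record.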