By Paul Bradley, Chief Data Scientist, ZirMed
Twitter: @Zirmed
The reasons claims are denied are so varied that managing denials can feel like chasing a thousand different tails. This situation is not surprising given that a hypothetical denial rate of just 5 percent translates to tens of thousands of denied claims per year for large hospitals—where real-world denial rates often range from 12 to 22 percent.
The massive scale of claims data and the large number of individual denial-related data points make finding meaningful—and non-obvious—correlations all but impossible without advanced statistical and predictive algorithms. But by leveraging such algorithms to build out predictive mathematical models, it’s possible to detect patterns and identify specific pockets of claims that will almost certainly be denied, and then catch those claims before they go out the door.
Automation enables staff to sort and filter those claims based on institution-specific thresholds for financial, operational, and clinical impact, so they can focus on the claims and accounts where the impact will be greatest. Efficiency can also be improved by “batching” work to the appropriate expert or team, who can then correct the issue and identify process improvements that will prevent it from recurring.
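To make that workflow concrete, the following is a minimal sketch in Python of how flagged claims might be prioritized and batched. The column names, impact threshold, and issue types are hypothetical illustrations, not fields from any particular system.

```python
# Hypothetical sketch: prioritize predicted denials by expected financial
# impact and batch them by issue type so each batch routes to one team.
import pandas as pd

# Assumed (hypothetical) columns: claim_id, denial_probability,
# claim_amount, issue_type.
flagged = pd.DataFrame({
    "claim_id": [101, 102, 103, 104],
    "denial_probability": [0.99, 0.97, 0.95, 0.99],
    "claim_amount": [12500.00, 480.00, 3200.00, 950.00],
    "issue_type": ["coding", "eligibility", "coding", "authorization"],
})

# Expected dollars at risk if the claim goes out the door uncorrected.
flagged["expected_loss"] = flagged["denial_probability"] * flagged["claim_amount"]

# Institution-specific threshold: only work claims above a chosen impact.
IMPACT_THRESHOLD = 500.00
worklist = flagged[flagged["expected_loss"] >= IMPACT_THRESHOLD]

# Batch the worklist by issue type, highest-impact claims first.
for issue, batch in worklist.sort_values(
        "expected_loss", ascending=False).groupby("issue_type"):
    print(issue, batch["claim_id"].tolist())
```

In practice, the threshold would reflect each institution's own tolerance for financial, operational, and clinical impact.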
Simply put, predictive modeling makes it possible to identify discrete pockets of claims that have a high likelihood of being denied (for example, 98 percent likelihood of denial or higher). A simple case example can provide insight into how best to identify such pockets of opportunity.
Background. A denials analysis was performed for a hospital system that had a denial rate of about 16 percent. The analysis incorporated the insurance code (i.e., the payer), patient class (e.g., inpatient, outpatient, emergency department visit), and thousands of other claim, payer, and patient data points potentially relevant to denials and appeals management. The predictive modeling algorithms sifted through these data points across more than 500,000 historical claims looking for patterns, correlations, and meaningful anomalies.
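The article does not name the specific algorithms used, but a shallow decision tree is one common way to surface claim segments whose denial rates differ sharply from the baseline. The sketch below is a hedged illustration using scikit-learn and synthetic data; the feature names, the planted high-risk pocket, and the dataset itself are all hypothetical.

```python
# Illustrative sketch only: mining denial "pockets" with a shallow decision
# tree. Features and data are hypothetical stand-ins for the thousands of
# claim, payer, and patient data points used in the real analysis.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 5000
claims = pd.DataFrame({
    "primary_insurance": rng.choice(
        ["Medicare Part B outpatient", "Commercial", "Medicaid"], n),
    "primary_procedure_code": rng.choice(["94.94", "88.72", "99.04"], n),
    "total_lab_charges": rng.uniform(0, 2000, n).round(2),
})

# Synthetic outcome: a ~16 percent baseline denial rate plus one planted
# high-risk pocket, for demonstration purposes only.
baseline = rng.random(n) < 0.16
pocket = ((claims["primary_insurance"] == "Medicare Part B outpatient")
          & (claims["total_lab_charges"] < 700)
          & (claims["primary_procedure_code"] == "94.94"))
claims["denied"] = (baseline | (pocket & (rng.random(n) < 0.99))).astype(int)

# One-hot encode categoricals; keep the numeric charge column as-is.
X = pd.get_dummies(claims[["primary_insurance", "primary_procedure_code"]]).assign(
    total_lab_charges=claims["total_lab_charges"])

# A shallow tree keeps every leaf human-readable: each leaf is a "pocket"
# whose denial rate can be compared with the overall baseline.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100, random_state=0)
tree.fit(X, claims["denied"])
print(export_text(tree, feature_names=list(X.columns)))
```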
Sample finding. As an example, the algorithms were able to establish the following logical connections (a short sketch for computing these rates follows the list):
- If the primary insurance on the account is “Medicare Part B outpatient,” the likelihood of denial increases from 16 percent to approximately 20 percent.
- If the primary insurance on the account is “Medicare Part B outpatient” and total laboratory charges are less than $700, the likelihood of denial actually drops to 12 percent—because, in general, Medicare Part B outpatient claims with low laboratory charges are not denied.
- If, however, the primary insurance on the account is “Medicare Part B outpatient,” total laboratory charges are less than $700, and the primary ICD-9 Procedure Code is “94.94: Other Counseling,” then the probability of denial skyrockets to 99 percent.
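Once the claims history is in tabular form, nested conditional rates like these are straightforward to compute. The following pandas sketch uses hypothetical column names and a tiny stand-in table; run against the hospital's actual 500,000-claim history, the same filters would reproduce the rates described above.

```python
# Sketch: the nested conditional denial rates above, computed with pandas.
# Column names are hypothetical stand-ins for the hospital's actual fields;
# `claims` would be the ~500,000-row historical claims table.
import pandas as pd

claims = pd.DataFrame({  # tiny illustrative stand-in
    "primary_insurance": ["Medicare Part B outpatient"] * 4 + ["Commercial"] * 2,
    "total_lab_charges": [250.0, 650.0, 900.0, 400.0, 300.0, 1200.0],
    "primary_procedure_code": ["94.94", "88.72", "94.94", "94.94", "94.94", "88.72"],
    "denied": [1, 0, 0, 1, 0, 0],
})

def denial_rate(df: pd.DataFrame) -> float:
    """Fraction of claims in `df` that were denied."""
    return df["denied"].mean()

part_b = claims[claims["primary_insurance"] == "Medicare Part B outpatient"]
low_lab = part_b[part_b["total_lab_charges"] < 700]
counseling = low_lab[low_lab["primary_procedure_code"] == "94.94"]

print(f"baseline:                     {denial_rate(claims):.0%}")
print(f"Part B outpatient:            {denial_rate(part_b):.0%}")
print(f"... and lab charges < $700:   {denial_rate(low_lab):.0%}")
print(f"... and procedure code 94.94: {denial_rate(counseling):.0%}")
```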
Summary of impact. These findings were summarized by the following simple rule:
Primary Insurance Code = ‘Medicare Part B outpatient’ and Total Lab Charges < $700 and Primary ICD-9 Procedure Code = ‘94.94: Other Counseling’
Over the set of more than 500,000 historical claims, the rule identified 780 claims that had a 99 percent probability of being denied. Moving forward, the hospital system could automatically flag claims exhibiting these traits for review before they were submitted to the payer, enabling a coder to examine the ICD-9 procedure coding and determine whether a more applicable code could be assigned in place of “94.94: Other Counseling.”
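Operationally, a discovered rule like this can be applied as a simple predicate in the claim-submission pipeline, so that matching claims are held for review rather than sent to the payer. The sketch below uses hypothetical field names; a production system would map them to the hospital's actual claim schema.

```python
# Sketch: holding claims that match the discovered rule before submission.
# Field names are hypothetical illustrations, not a real claim schema.

def should_hold_for_review(claim: dict) -> bool:
    """Return True if the claim matches the high-denial-risk rule."""
    return (
        claim.get("primary_insurance") == "Medicare Part B outpatient"
        and claim.get("total_lab_charges", 0) < 700
        and claim.get("primary_procedure_code") == "94.94"
    )

claim = {
    "primary_insurance": "Medicare Part B outpatient",
    "total_lab_charges": 250.00,
    "primary_procedure_code": "94.94",
}

if should_hold_for_review(claim):
    print("Route to coder: review ICD-9 procedure code 94.94 before submitting.")
```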
The application of predictive modeling to estimate the likelihood of denials has the potential to identify thousands of such pockets representing both high and low denial rates. Organizations can benefit from predictive modeling regardless of whether they already have a program in place to monitor denials by payer or department, because this approach detects correlations at many different levels, including the code, claim, charge, patient, and provider levels.
Having access to data sets with high levels of integrity, richness, and granularity is of paramount importance to effective predictive modeling. Organizations that comprehensively aggregate clinical and financial data, and that have clear processes in place to ensure the consistency and integrity of those data, will find correlations across a wider data set that can inform their financial and operational decisions regarding staffing, training, service lines, and payment contracts.
Paul Bradley is chief data scientist at ZirMed. This article was originally published on ZirMed and is republished here with permission.