William A. Hyman
Professor Emeritus, Biomedical Engineering
Texas A&M University, w-hyman@tamu.edu
MIPS relies on the calculation of a composite (or final) score: a linear combination of 4 factors (3 for 2017), each multiplied by its corresponding weight, as explained in the more than 2,000-page Final Rule. Notwithstanding that it is a Final Rule, comments are requested. The equation for 2017 is CS = factor1 x weight1 + factor2 x weight2 + factor3 x weight3. (There may be a multiplier of 100 to bring the score back to a 0-100 range, depending on how the factors and weights are expressed.) The factors and their weights for 2017 are Quality (60%), Improvement Activities (15%), and Advancing Care Information (25%). For 2018 and beyond (or for as long as this methodology lasts), Cost is the 4th factor and the weights are revised to reflect 4 factors instead of 3.
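To make the arithmetic concrete, here is a minimal sketch in Python of the 2017 three-factor combination. This is my own illustration, not anything from CMS; the function and variable names are mine, and the factor scores are assumed to already be on a 0-100 scale.

```python
# Illustrative sketch only -- not CMS's implementation.
# Assumes each factor score has already been normalized to a 0-100 scale,
# so no trailing multiplier of 100 is needed.

WEIGHTS_2017 = {
    "quality": 0.60,
    "improvement_activities": 0.15,
    "advancing_care_information": 0.25,
}

def composite_score(factor_scores, weights=WEIGHTS_2017):
    """Linear combination of weighted factor scores, per the 2017 form of the equation."""
    return sum(factor_scores[name] * weight for name, weight in weights.items())

# A provider scoring 80, 60, and 70 in the three factors:
print(composite_score({
    "quality": 80,
    "improvement_activities": 60,
    "advancing_care_information": 70,
}))  # 74.5
```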
Before we even get to the calculations, we should note that the component parts of the factors are not the same for each provider, because providers get to pick the pieces of each factor that will be considered. For example, most participants will report 6 Quality measures selected from a list of 271. There are up to 11 Advancing Care Information measures, and 4 Improvement Activities selected from a list of 93. This means that different providers can select and report widely different measures and nonetheless be compared to each other. Moreover, providers who are paying attention will select those measures on which they know they are doing well, and avoid measures on which they do badly. This is a bit like a best-cook contest in which a baker can bake a pie and a BBQ champ can make brisket, and the judges then compare the pie to the brisket.
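As a toy illustration of that selection effect (the measure names and self-assessed scores below are invented, and the real lists are far longer), two providers can each report their best 6 measures and end up reporting nothing in common:

```python
# Toy illustration of measure self-selection. Measure names and scores are
# invented; the real Quality list has 271 measures, of which most providers report 6.

def pick_best_measures(self_assessed, how_many=6):
    """Report only the measures the provider expects to score well on."""
    return sorted(self_assessed, key=self_assessed.get, reverse=True)[:how_many]

provider_a = {"diabetes_a1c_control": 95, "tobacco_screening": 60, "fall_risk_assessment": 88,
              "medication_reconciliation": 72, "flu_immunization": 91, "bmi_screening": 55,
              "depression_screening": 83}
provider_b = {"cataract_outcomes": 97, "imaging_for_low_back_pain": 64, "aspirin_use": 90,
              "statin_therapy": 85, "hba1c_poor_control": 70, "readmission_rate": 58,
              "advance_care_planning": 88}

# Each reports a different set of 6, yet the resulting Quality scores are compared head to head.
print(pick_best_measures(provider_a))
print(pick_best_measures(provider_b))
```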
The equation is also rich with opaque policy decisions, assumptions, and intended and perhaps unintended consequences of the mathematical manipulations. The first policy component is simply that there are precisely 3 (or 4) parameters that matter. The second is their relative weights. Since changing the weights changes the outcome for an identical set of factor scores, these weights are significant and yet fairly arbitrary. A third, broader, choice is that the weighted factors are linearly added. This has no underlying basis, and the component scores could be combined in some other way. A consequence of linear addition is that high performance in one category can offset lower performance in another. Thus providers with identical final scores will have different underlying performance in the factors, and in the components of the factors.

Also built into the equation is the fact that the other factors can offset Quality: someone with lower Quality who is doing better on Improvement can outscore someone with higher Quality. This will be particularly intriguing when Cost becomes a factor in 2018, i.e., a lower-cost, lower-quality provider could come out better than a higher-cost, higher-quality provider. Lower cost itself is good for the payor, but not necessarily for the patient. But of course, those of us who pay taxes are all the underlying payors, which leads to the common belief that healthcare costs should be tightly controlled, except when it is me who is consuming healthcare, at which point cost should be no object. This is a special case of the general proposition of being against government spending or tax loopholes except when you are the beneficiary.
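To see how much those fairly arbitrary weights matter, here is a small sketch (the providers' scores and the alternative weighting are made up) in which the same two sets of factor scores trade places when the weights change:

```python
# Sketch: identical factor scores under two different weight choices.
# The providers' scores and the "flatter" alternative weights are invented.

def combine(scores, weights):
    return sum(s * w for s, w in zip(scores, weights))

provider_x = (90, 40, 40)   # strong Quality, weaker elsewhere
provider_y = (60, 80, 80)   # weaker Quality, stronger elsewhere

weights_2017 = (0.60, 0.15, 0.25)   # the Quality-heavy 2017 weights
weights_alt = (0.30, 0.35, 0.35)    # a hypothetical flatter weighting

print(combine(provider_x, weights_2017), combine(provider_y, weights_2017))  # 70.0 68.0 -> X ahead
print(combine(provider_x, weights_alt), combine(provider_y, weights_alt))    # 55.0 74.0 -> Y ahead
```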
Let’s look at some examples using the three-factor 2017 scheme. Suppose Provider A scores 70, 50, and 50 in Quality, Improvement, and Advancing Care, for a composite score of 62. Provider B scores 50, 80, and 80, which is also a 62. Is their performance equal, such that they should earn an equal bonus, or both be denied one? If Provider C scores 50, 80, and 90, for a 64.5, is this really superior performance despite the lower Quality?
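Running that arithmetic with the 2017 weights (a quick check, nothing more) reproduces the numbers above:

```python
# Quick check of the worked examples, using the 2017 weights.
weights = (0.60, 0.15, 0.25)   # Quality, Improvement Activities, Advancing Care Information

providers = {
    "A": (70, 50, 50),
    "B": (50, 80, 80),
    "C": (50, 80, 90),
}

for name, scores in providers.items():
    composite = sum(s * w for s, w in zip(scores, weights))
    print(name, composite)   # A 62.0, B 62.0, C 64.5
```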
Another problem with the entire concept is that, because the bonus for those at the top is taken from those at the bottom, a provider cannot be superior unless another is inferior. The performance measure is thus inherently relative and is not determined only by one’s own score. I am reminded here of annual raise time during my department head days, when I was given a fixed amount of money to distribute. If everyone performed the same, no matter how good they were, everyone would get an average raise. If someone complained to me about getting an average raise I would ask, “Whose raise do you want me to reduce so that you can have more?” Of course, some were more than willing to tell me.