The holidays are fast approaching, and the “elves” are busy at the North Pole. No, not the presidential candidates. No, not the Capitol Hill pols. And no, not those unrelenting pursuers of objectivity and truth: the journalists.
I refer instead to the bureaucrats, in particular those implementing the new “comparative effectiveness review” (CER) process for comparing alternative treatments for given medical conditions.
The 2010 Patient Protection and Affordable Care Act — aka Obamacare — established the Patient-Centered Outcomes Research Institute to “conduct research to provide information about the best available evidence to help patients and their health care providers make more informed decisions.” What could be wrong with that? CER is supposed to be “a rigorous evaluation of the impact of different options that are available for treating a given medical condition for a particular set of patients.”
Alas, there is a problem: The federal government does not have patients. Instead, it has interest groups engaged in a long twilight struggle over shares of the federal budget pie. Less for one group means more for others, and even modest reductions in the huge federal health-care budget are a tempting goal for other constituencies.
In other words, there can be no such thing as unpoliticized science in the Beltway. It is inevitable that political pressures will lead policymakers to use the findings yielded by CER analyses to influence decisions on coverage, reimbursement, or incentives within Medicare, Medicaid, and other federal health programs.
Consider the new environment confronting would-be investors in new and improved medical technologies, such as pharmaceuticals, medical devices, and equipment. One cannot know in advance either how CER analyses of interest will turn out or how their findings will be used. Indeed, the uncertainties are enormous. The findings of statistical analyses are driven in substantial part by the design of the underlying studies.
Such studies will always conflict to some degree, introducing considerable subjectivity into the process of deriving “conclusions” from the CER process. Even for a given study, experts inevitably will differ on the conclusions to be drawn and/or the recommendations to be made. More important, the process of scientific discovery is dynamic: Later findings can call earlier ones into question, and CER analysis necessarily will find itself “behind the curve” as medical technologies and treatment protocols evolve over time. And what is true for a population may not be true for a given subset of patients, a problem for which such top-down approaches as CER are particularly ill-suited.
As an example, let us recall the experience of the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), conducted from 1994 through 2002. ALLHAT was a large (over 42,000 patients) and well-publicized comparative analysis of four alternative hypertension drugs, as well as the effects of lipid drugs, on the rates of heart attacks, strokes, and early deaths. Substantial disagreement emerged in the scientific literature over the design of the trial, the interpretation of the data, the importance of observed side effects, and a number of other parameters. Other CER analyses suggested differing conclusions. As the end of the ALLHAT study approached, new drugs (in particular the statin class of cholesterol drugs) and drug combination therapies somewhat reduced the usefulness of the ALLHAT findings, and there is little evidence in the literature that the trial has had an appreciable effect on clinical practice.