Employing comparative effectiveness research—determining which medical treatments are most effective—is one of the means by which, the Obama administration says, government can reduce health care spending. If government pays only for the treatments that are most effective, the theory goes, then it will save money.
I’ve been skeptical about comparative effectiveness research. In my July 12 Examiner column, I wrote, “But comparative effectiveness research is, if not junk science, not a fully developed intellectual exercise. Medicine is an art as well as a science, and comparative effectiveness research may too often compare apples and oranges.” In response, an email correspondent wrote, “More generally, the entire concept of ‘comparative effectiveness’ goes against the cutting edge of biomedical research. Evidence mounts daily that humans are far more individualized biologically than previously believed. . . different ethnic groups, age cohorts respond to drugs in ways almost as marked as disparate genders. Comparative effectiveness testing given just those variables quickly becomes more expensive than any possible realized savings. In short, ‘comparative effectiveness’ is sloppy, shortcut thinking that ignores reality in an attempt to end debate, rather than struggle with the difficult question of how far we individuate treatment.”
Today in the Wall Street Journal we have testimony to the same effect from two individuals far more expert than me or my email correspondent: Dr. Jerome Groopman and Dr. Pamela Hartzband, both on the faculty of Harvard Medical School. Dr. Groopman is also a staff writer for the New Yorker. They write:
“But once we leave safety measures and emergency therapies where patients have scant say, what is ‘the right thing’? Data from clinical studies provide averages from populations and may not apply to individual patients. Clinical studies routinely exclude patients with more than one medical condition and often the elderly or people on multiple medications. Conclusions about what works and what doesn’t work change much too quickly for policy makers to dictate clinical practice.
“An analysis from the Ottawa Health Research Institute published in the Annals of Internal Medicine in 2007 reveals how long it takes for conclusions derived from clinical studies about drugs, devices and procedures to become outdated. Within one year, 15 of 100 recommendations based on the ‘best evidence’ had to be significantly reversed; within two years, 23 were reversed, and at 5 1/2 years, half were contradicted. Americans have witnessed these reversals firsthand as firm ‘expert’ recommendations about the benefits of estrogen replacement therapy for postmenopausal women, low fat diets for obesity, and tight control of blood sugar were overturned.”
The idea that we can standardize medical treatments, so that health care operates with the mass efficiency of an assembly line at one of the old Big Three auto company plants, seems to be a delusion. There’s a reason most of us are not only not physicians but not capable of becoming physicians. There’s a reason it takes physicians four years to earn their medical degrees, and that they typically need four or more years of post-degree training after that. Comparative effectiveness research may very well be useful. But to standardize medical treatment on that basis seems like the height of folly.