
Monday, January 24, 2011

Why Almost Everything You Hear About Medicine Is Wrong




If you follow the news about health research, you risk whiplash. First garlic
lowers bad cholesterol, then—after more study—it doesn’t. Hormone replacement
reduces the risk of heart disease in postmenopausal women, until a huge study
finds that it doesn’t (and that it raises the risk of breast cancer to boot).
Eating a big breakfast cuts your total daily calories, or not—as a study
released last week finds. Yet even if biomedical research can be a fickle guide,
we rely on it.


But what if wrong answers aren’t the exception but the rule? More and more
scholars who scrutinize health research are now making that claim. It isn’t just
an individual study here and there that’s flawed, they charge. Instead, the very
framework of medical investigation may be off-kilter, leading time and again to
findings that are at best unproved and at worst dangerously wrong. The result is
a system that leads patients and physicians astray—spurring often costly
regimens that won’t help and may even harm you.

It’s a disturbing view, with huge implications for doctors, policymakers,
and health-conscious consumers. And one of its foremost advocates, Dr. John P.A.
Ioannidis, has just ascended to a new, prominent platform after years of
crusading against baseless health and medical claims. As the new chief of
Stanford University’s Prevention Research Center, Ioannidis is cementing his
role as one of medicine’s top mythbusters. “People are being hurt and even
dying” because of false medical claims, he says: not quackery, but errors in
medical research.



This is Ioannidis’s moment. As medical costs hamper the economy and impede
deficit-reduction efforts, policymakers and businesses are desperate to cut them
without sacrificing sick people. One no-brainer solution is to use and pay for
only treatments that work. But if Ioannidis is right, most biomedical studies
are wrong.



In just the last two months, two pillars of preventive medicine fell. A major
study concluded there’s no good evidence that statins (drugs like Lipitor and
Crestor) help people with no history of heart disease. The study, by the
Cochrane Collaboration, a global consortium of biomedical experts, was based on
an evaluation of 14 individual trials with 34,272 patients. Cost of statins:
more than $20 billion per year, of which half may be unnecessary. (Pfizer, which
makes Lipitor, responds in part that “managing cardiovascular disease risk
factors is complicated”). In November a panel of the Institute of Medicine
concluded that having a blood test for vitamin D is pointless: almost everyone
has enough D for bone health (20 nanograms per milliliter) without taking
supplements or calcium pills. Cost of vitamin D tests: $425 million per year.





Ioannidis, 45, didn’t set out to slay medical myths. A child prodigy (he was
calculating decimals at age 3 and wrote a book of poetry at 8), he graduated
first in his class from the University of Athens Medical School, did a residency
at Harvard, oversaw AIDS clinical trials at the National Institutes of Health in
the mid-1990s, and chaired the department of epidemiology at Greece’s University
of Ioannina School of Medicine. But at NIH Ioannidis had an epiphany. “Positive”
drug trials, which find that a treatment is effective, and “negative” trials, in
which a drug fails, take the same amount of time to conduct. “But negative
trials took an extra two to four years to be published,” he noticed. “Negative
results sit in a file drawer, or the trial keeps going in hopes the results turn
positive.” With billions of dollars on the line, companies are loath to declare
a new drug ineffective. During that publishing lag, patients keep receiving
treatments that are actually ineffective. That made Ioannidis
wonder, how many biomedical studies are wrong?
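
A short, purely illustrative simulation shows why that lag matters. Nothing in it comes from Ioannidis's work; the trial sizes, counts, and publication cutoff are assumptions chosen only to show how a file drawer full of negative results can make a useless drug look effective:

# Illustrative file-drawer sketch (all numbers assumed): a drug with zero real
# effect, many small trials, and only the "positive" ones visible in print.
import random, statistics

random.seed(1)

def run_trial(n=50):
    """Average improvement seen in one small trial of a drug with no true effect."""
    return statistics.mean(random.gauss(0, 1) for _ in range(n))

trials = [run_trial() for _ in range(200)]
published = [t for t in trials if t > 0.23]   # roughly the trials reaching p < 0.05, one-sided

print(f"Average effect across all {len(trials)} trials: {statistics.mean(trials):+.2f}")
print(f"Average effect in the {len(published)} 'published' trials: {statistics.mean(published):+.2f}")
# The full set of trials averages about zero; the visible subset looks like a real benefit.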



His answer, in a 2005 paper: “the majority.” From clinical trials of new
drugs to cutting-edge genetics, biomedical research is riddled with incorrect
findings, he argued. Ioannidis deployed an abstruse mathematical argument to
prove this, which some critics have questioned. “I do agree that many claims are
far more tenuous than is generally appreciated, but to ‘prove’ that most are
false, in all areas of medicine, one needs a different statistical model and
more empirical evidence than Ioannidis uses,” says biostatistician Steven
Goodman of Johns Hopkins, who worries that the most-research-is-wrong claim
“could promote an unhealthy skepticism about medical research, which is being
used to fuel anti-science fervor.”
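
The flavor of that mathematical argument can be conveyed with a back-of-the-envelope calculation. The sketch below is not taken from Ioannidis's paper; the priors, power, and significance threshold are assumed numbers, chosen only to show how the arithmetic works:

# Back-of-the-envelope sketch (assumed numbers, not figures from Ioannidis's paper):
# among studies that report a "positive" result, what fraction describe a real effect?

def fraction_of_positives_that_are_true(prior, power, alpha):
    true_pos = prior * power            # true effects that get detected
    false_pos = (1 - prior) * alpha     # null effects that clear p < alpha by chance
    return true_pos / (true_pos + false_pos)

# Confirmatory trial of a plausible hypothesis: assume half are true, 80% power.
print(fraction_of_positives_that_are_true(prior=0.5, power=0.8, alpha=0.05))   # ~0.94

# Exploratory hunt through long-shot hypotheses: assume 1 in 100 is true, 20% power.
print(fraction_of_positives_that_are_true(prior=0.01, power=0.2, alpha=0.05))  # ~0.04

When most hypotheses under test are long shots and studies are underpowered, most "positive" findings come out false even if every individual study is run honestly.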



Even a cursory glance at medical journals shows that once-heralded studies
keep falling by the wayside. Two 1993 studies concluded that vitamin E prevents
cardiovascular disease; that claim was overturned by more rigorous experiments,
in 1996 and 2000. A 1996 study concluding that estrogen therapy reduces older
women’s risk of Alzheimer’s was overturned in 2004. Numerous studies concluding
that popular antidepressants work by altering brain chemistry have now been
contradicted (the drugs help with mild and moderate depression, when they work
at all, through a placebo effect), as has research claiming that early cancer
detection (through, say, PSA tests) invariably saves lives. The list goes
on.



Despite the explosive nature of his charges, Ioannidis has collaborated with
some 1,500 other scientists, and Stanford, epitome of the establishment, hired
him in August to run the preventive-medicine center. “The core of medicine is
getting evidence that guides decision making for patients and doctors,” says
Ralph Horwitz, chairman of the department of medicine at Stanford. “John has
been the foremost innovative thinker about biomedical evidence, so he was a
natural for us.”



Ioannidis’s first targets were shoddy statistics used in early genome
studies. Scientists would test one or a few genes at a time for links to
virtually every disease they could think of. That just about ensured they would
get “hits” by chance alone. When he began marching through the genetics
literature, it was like Sherman laying waste to Georgia: most of these candidate
genes could not be verified. The claim that variants of the vitamin D–receptor
gene explain three quarters of the risk of osteoporosis? Wrong, he and
colleagues proved in 2006: the variants have no effect on osteoporosis. That
scores of genes identified by the National Human Genome Research Institute can
be used to predict cardiovascular disease? No (2009). That six gene variants
raise the risk of Parkinson’s disease? No (2010). Yet claims that gene X raises
the risk of disease Y contaminate the scientific literature, affecting personal
health decisions and sustaining the personal genome-testing
industry.
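
The trap is easy to demonstrate. In the illustrative simulation below (the gene and disease counts are arbitrary assumptions), none of the variants has any real effect, yet the conventional p < 0.05 threshold still hands back dozens of "links":

# Illustrative simulation (assumed sizes): gene variants with NO real effect,
# screened against many diseases at the conventional p < 0.05 threshold.
import random

random.seed(0)
genes, diseases, alpha = 20, 50, 0.05

# Under the null hypothesis a p-value is uniform on [0, 1], so p-values can be
# drawn directly instead of simulating patient data.
false_hits = sum(1 for _ in range(genes * diseases) if random.random() < alpha)
print(f"{false_hits} spurious gene-disease 'links' out of {genes * diseases} tests")
# Expect about 0.05 * 1,000 = 50 chance findings, each one publishable-looking.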



Statistical flukes also plague epidemiology, in which researchers look for
links between health and the environment, including how people behave and what
they eat. A study might ask whether coffee raises the risk of joint pain, or
headaches, or gallbladder disease, or hundreds of other ills. “When you do
thousands of tests, statistics says you’ll have some false winners,” says
Ioannidis. Drug companies make a mint on such dicey statistics. By testing an
approved drug for other uses, they get hits by chance, “and doctors use that as
the basis to prescribe the drug for this new use. I think that’s wrong.” Even
when a claim is disproved, it hangs around like a deadbeat renter you can’t
evict. Years after the claim that vitamin E prevents heart disease had been
overturned, half the scientific papers mentioning it cast it as true, Ioannidis
found in 2007.
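
The same arithmetic says what to expect when a field runs thousands of such tests, and why tightening the threshold helps. The test count and the stricter cutoff below are assumptions for illustration:

# Expected chance "winners" among many tests of true-null hypotheses (assumed counts).
tests = 10_000                  # e.g., exposures crossed with ailments in a large survey
for alpha in (0.05, 5e-8):      # conventional threshold vs. a genome-wide-style cutoff
    print(f"alpha = {alpha}: about {tests * alpha:.4f} false winners expected")
# At p < 0.05 roughly 500 spurious associations are expected; at the far stricter
# cutoff the expected number of chance hits is essentially zero.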



The situation isn’t hopeless. Geneticists have mostly mended their ways,
tightening statistical criteria, but other fields still need to clean house,
Ioannidis says. Surgical practices, for instance, have not been tested to nearly
the extent that medications have. “I wouldn’t be surprised if a large proportion
of surgical practice is based on thin air, and [claims for effectiveness] would
evaporate if we studied them closely,” Ioannidis says. That would also save
billions of dollars. George Lundberg, former editor of The Journal of the
American Medical Association, estimates that strictly applying criteria like
those Ioannidis pushes would save $700 billion to $1 trillion a year in U.S.
health-care spending.



Of course, not all conventional health wisdom is wrong. Smoking kills, being
morbidly obese or severely underweight makes you more likely to die before your
time, processed meat raises the risk of some cancers, and controlling blood
pressure reduces the risk of stroke. The upshot for consumers: medical wisdom
that has stood the test of time—and large, randomized, controlled trials—is more
likely to be right than the latest news flash about a single food or
drug.

http://www.newsweek.com/2011/01/23/why-almost-everything-you-hear-about-medicine-is-wrong.html
