The lofty goal of evidence-based medicine: Evidence-based medicine became the mantra after David M. Eddy introduced the term during the 1990s in publications in the New England Journal of Medicine and Health Affairs. The idea is that research evidence, rather than clinical hunch, should guide the behavior of physicians. Guidelines for treating particular conditions have been developed from the available findings of published studies, and medical schools base their training on these guidelines and studies.
Unfortunately, the term “evidence-based medicine” has become a mechanism for reassuring physicians that they are doing the right thing. The problem is that the edifice of evidence is rigged to allow cheating. A variety of factors combine to create a system in which the “evidence” should not be believed.
Believing the Positive Findings While Ignoring the Negative: In teaching research and statistics, an initial lesson includes a discussion of Type 1 errors. In any test of statistical significance, researchers know how often they will be wrong in concluding that a particular treatment was effective. The current standard is to accept wrong conclusions 5% of the time. (That is, in believing the results of your statistically significant test, you will be making a Type 1 error 5% of the time.) Researchers also know that if a study examines multiple measures, then that “willingness to be wrong 5% of the time” no longer accurately describes the probabilities. One has to set a much tougher standard (a p value much smaller than 0.05) when making multiple tests of significance. We call this correcting for alpha-inflation. The problem is that the current system for approving drugs is set up to make Type 1 errors. In approving drugs in psychiatry, the FDA does not aggregate across the studies submitted by a drug company. The FDA bases approval on two positive trials and ignores all the negative trials. When Irving Kirsch used the Freedom of Information Act to obtain all the antidepressant trials from the FDA, he found that only for the very severely depressed do the drugs work better than a placebo. In the US, the bulk of antidepressants are prescribed for the less severely depressed: the group for which the drugs are no better than placebo. Thus, in the US, we are spending a lot of money on placebos, and more problematically, these placebos have plenty of very negative side effects (see Chapter 4 in Neuroscience for Psychologists and Other Mental Health Professionals for full details).
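The arithmetic behind alpha-inflation is worth seeing directly. A minimal sketch (assuming independent tests, which is the textbook simplification) shows how fast the chance of at least one Type 1 error grows with the number of significance tests, and what a Bonferroni correction does about it:

```python
ALPHA = 0.05  # the conventional per-test Type 1 error rate

# Analytic familywise error rate for k independent tests of a true null:
# P(at least one false positive) = 1 - (1 - alpha)^k
for k in (1, 2, 10, 20):
    fwer = 1 - (1 - ALPHA) ** k
    print(f"{k:2d} independent tests: P(at least one Type 1 error) = {fwer:.2f}")

# The Bonferroni correction restores roughly a 5% overall error rate
# by requiring each of the k tests to meet alpha / k instead of alpha.
k = 10
print(f"Bonferroni threshold for {k} tests: p < {ALPHA / k:.3f}")
```

With 10 outcome measures and no correction, the chance of at least one spuriously “significant” result is about 40%, not 5%; this is why the FDA's practice of counting two positive trials while discarding the negatives amounts to institutionalized alpha-inflation.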
The Medical Journals Are Also Set Up to Make Type 1 Errors: Erick Turner and Ben Goldacre, among others, have drawn attention to the fact that positive studies are much more likely to be published than negative studies. Indeed, the results of positive studies are often published multiple times, while studies failing to support efficacy are buried. Thus, reading the literature, the source of information for most doctors, yields a very biased perception of reality. Sometimes big meta-analyses reviewing all the published studies are conducted. But when access to all the data, including the unpublished studies, is not available, there is little reason to believe the aggregated findings. Unfortunately, the pharmaceutical houses, which fund most drug trials in the US, are under no obligation to publish negative findings. There is no law against suppressing the truth.
The Ways to Put Lipstick on a Pig: Erick Turner, in the video “How publication bias corrupts the evidence base for psychiatric drugs,” available through madinamerica.com, discusses the ways in which pharmaceutical trials can make negative trials of a drug look like support for drug efficacy. (Erick Turner knows whereof he speaks: he was employed at the FDA and now teaches ethics at Oregon Health & Science University.) Basically, the ways of “putting lipstick on a pig” are variations on a theme: scan the data after the study to find the Type 1 errors, or, stated alternatively, see where you can capitalize on chance findings. The following strategies are popular favorites:
- If the outcome is not supported at the planned study ending, scan for a point in time when the outcome did support drug efficacy, and publish the study as if that drug-favorable time point had been the planned time point for evaluating the outcome.
- If the planned outcome is not supportive, look for a minor measure that might be supportive and discuss this finding as if it were the planned major variable.
- Slice the subjects into subgroups and see if there is some subgroup for which the drug worked.
- If four study sites were included in the study, see whether things look better at a particular site and only discuss the site that was positive.
- Everyone knows that antidepressants and antipsychotics have rather severe withdrawal phenomena, so in your study of drug efficacy, use subjects who had been medicated and were then pulled off the drug; when you resupply subjects with the drug, you will relieve withdrawal symptoms and the drug will appear efficacious.
- Only count those subjects who completed the trial, and ignore all those who left the trial because the drug wasn’t working.
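The subgroup-slicing strategy above can be sketched as a toy simulation (the subgroup count and trial numbers here are invented for illustration). Under a true null hypothesis, each subgroup's p-value is uniformly distributed on [0, 1], so scanning enough subgroups reliably turns up a “significant” one even when the drug does nothing:

```python
import random

random.seed(42)

ALPHA = 0.05
N_SUBGROUPS = 12    # e.g. slicing by sex, age band, study site, severity...
N_TRIALS = 10_000   # simulated trials of a drug with NO true effect

# Under a true null, each subgroup's p-value is uniform on [0, 1].
hits = 0
for _ in range(N_TRIALS):
    p_values = [random.random() for _ in range(N_SUBGROUPS)]
    if min(p_values) < ALPHA:  # "some subgroup for which the drug worked"
        hits += 1

print(f"Null trials with at least one 'significant' subgroup: "
      f"{hits / N_TRIALS:.1%}")
# Analytically, 1 - 0.95**12, i.e. roughly 46% of null trials
# yield a publishable-looking subgroup.
```

Nearly half of the simulated no-effect trials produce a subgroup that looks publishable, which is exactly why a prespecified primary outcome matters.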
In the video “How publication bias corrupts the evidence base for psychiatric drugs,” Erick Turner compares findings as reported to the FDA with findings from the same studies as published in journals. Once the drug companies massage the data using these clever strategies, studies that were negative as reported to the FDA suddenly appear to support drug efficacy. In the publications, the source for educating physicians, it looks like the evidence in support of the drug is strong.
Registration of Clinical Trials: As Erick Turner points out, the proper way to evaluate the findings from a study is to know the specific hypothesis being tested. The outcome expected to be changed by the drug, at a particular time point, must be agreed upon before the study begins. Scanning the data after the fact for anything that might support the general hypothesis is capitalizing on chance. In 2007, the FDA began requiring drug companies to register the trials of drugs for which approval applications had been made to the FDA. The International Committee of Medical Journal Editors pledged to publish only those studies that had been properly registered. The process of registration requires that the protocol for the study be provided. With specification of which measures will be evaluated at which points in time, there is a way to check whether a researcher is guilty of capitalizing on chance. Interested people can go to the FDA website to determine the design of the trial. Presumably the findings from the study are to be provided to the FDA within a year following completion and are to be made publicly available. Any doctor can then go to the ClinicalTrials.gov website and access the information to do his/her own evaluation of the drug’s efficacy. (Turner has published directions for doctors on how to navigate the ClinicalTrials.gov website.) However, the FDA is reluctant to provide the raw data, making it difficult for interested persons to run their own analyses.
Compliance with Clinical Trials Registration: While the ClinicalTrials.gov website is a very important step in the right direction, it won’t fix everything. Post-approval studies of a drug are not registered, and the data from studies conducted before the change in policy are not available. Moreover, Scott, Rucklidge, and Mulder (2015) evaluated the degree of compliance by journal editors with the new mandates: only 33.1% of published studies were prospectively registered with clearly defined outcomes. Ben Goldacre, in his book and TED talks, also concludes that medical journal compliance with commitments to publish only the results of registered studies has not been good. While capitalizing on chance has been particularly widespread in psychiatry, McGauran et al. (2010) note that the lack of truth-telling is not limited to psychiatry but occurs throughout various areas of medicine.
Questions Not Asked by the FDA: The FDA’s function is to evaluate drugs for safety and efficacy. When procedures and standards were promulgated at the FDA years ago, most of the available drugs were assumed to be taken like antibiotics: used for a short period of time until the condition resolves, and then stopped. Times have changed. Many Americans take medications daily for years at a time; people take psychotropic drugs for decades. Yet the FDA evaluates psychotropic medications over 8 weeks and does not collect data on efficacy beyond that point. While doctors are supposed to report adverse events (side effects), it is estimated that only 1-10% of adverse events are actually reported (see Chapter 3 of Neuroscience for Psychologists and Other Mental Health Professionals). Particularly for drugs that are taken daily for extended periods, the long-term impact on the body is not known at the time of drug approval. Moreover, the FDA requires no evaluation of the difficulty of withdrawing from a drug. (The withdrawal process from antidepressant medications can be rather severe.) Certainly, information about long-term efficacy, about the long-term impact on the body, and about the difficulty of drug discontinuation would be very important for both physicians and patients when deciding whether to initiate drug use. This information is just not available when drugs are released onto the market.
Effective System Change? Given the current system, I wonder how the broken system of medicine in the US will ever be fixed. Evidence-based medicine is a good thing, but that is not the system we have. Spielmans and Parry (2010) suggest that our current system is marketing-based medicine, not evidence-based medicine. Erick Turner suggests that individual doctors can go to the ClinicalTrials.gov website to evaluate drugs for themselves. Perhaps some will. We can wait for someone like Irving Kirsch to evaluate all the FDA data to assess efficacy, but Kirsch is then fighting against all the “noise” from the published studies in the literature. I wonder whether America’s escalating medical costs, and the fact that medical care consumes more of GDP in the US than in any other country in the world, might be a force for demanding a more truthful system. Surely those who worry about the national debt will want to ensure that their dollars are not being wasted. Perhaps it will be the states, which have to balance their budgets, whose Medicaid/Medicare panels might demand honest evaluations of which treatments are safe and effective. I have faith that in the long run the truth will win. However, as John Maynard Keynes said, “In the long run we are all dead.”
Ghaemi, S. N. (2009). The failure to know what isn’t known: negative publication bias with lamotrigine and a glimpse inside peer review. Evidence Based Mental Health, 12(3), 65-68.
Goldacre, B. (2012). Bad pharma: how drug companies mislead doctors and harm patients. New York: Faber & Faber.
Kirsch, I. (2010). The emperor’s new drugs: exploding the antidepressant myth. New York, NY: Basic Books.
McGauran, N., Wieseler, B., Kreis, J., Schüler, Y.-B., Kölsch, H., & Kaiser, T. (2010). Reporting bias in medical research—a narrative review. Trials, 11, 37.
Melander, H., Ahlqvist-Rastad, J., Meijer, G., & Beermann, B. (2003). Evidence b(i)ased medicine—selective reporting from studies sponsored by pharmaceutical industry: review of studies in new drug applications. British Medical Journal, 326, 1171.
Scott, A., Rucklidge, J. J., & Mulder, R. T. (2015). Is mandatory prospective trial registration working to prevent publication of unregistered trials and selective outcome reporting? An observational study of five psychiatry journals that mandate prospective clinical trial registration. PLoS One, DOI: 10.1371/journal.pone.0133718.
Spielmans, G. I., & Parry, P. I. (2010). From evidence-based medicine to marketing-based medicine: evidence from internal industry documents. Journal of Bioethical Inquiry, DOI: 10.1007/s11673-010-9208-8.
Turner, E. H. (2013). How to access and process FDA drug approval packages for use in research. British Medical Journal, 347, f5992.
Turner, E. H., Matthews, A. M., Linardatos, E., Tell, R. A., & Rosenthal, R. (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358(3), 252-260.