Thursday, January 17, 2008

Efficacy Overstated for Antidepressants

By Crystal Phend
PORTLAND, Ore., Jan. 16 -- Publication bias may have cast the benefit of antidepressant medications in a better light than deserved, researchers said.

A third of FDA-registered antidepressant trials were never published, found Erick H. Turner, M.D., of the Oregon Health and Science University and Portland VA Medical Center, and colleagues in a meta-analysis.

Because unpublished studies were almost twice as likely to have negative findings as those that were published, effect sizes in the literature were artificially inflated by 11% to 69% for individual drugs and by 32% overall, they reported in the Jan. 17 issue of the New England Journal of Medicine.
"By altering the apparent risk-benefit ratio of drugs," they said, "selective publication can lead doctors to make inappropriate prescribing decisions that may not be in the best interest of their patients and, thus, the public health."
Studies have raised concerns about selective publication of safety issues with selective serotonin-reuptake inhibitors for depression in children, the researchers said.
To see whether these issues have an impact on efficacy as well, the researchers analyzed all phase II and III clinical-trial programs for 12 antidepressant agents approved by the FDA from 1987 through 2004.
They gathered hard copies of the FDA's statistical and medical reviews from Freedom of Information Act requests for eight older drugs. Reviews of the four newer antidepressants were available on an FDA Web site.
The researchers searched the literature for all FDA-registered randomized, double-blind, placebo-controlled studies of short-term antidepressant use, then contacted each drug sponsor's medical-information department if no publications were found.
Overall, 31% of the studies -- 23 of 74 -- were unpublished.
Sample size did not appear to be a factor in publication. Unreported trials were not substantially or significantly smaller than those that were published (median 146 patients versus 153, P=0.29).
However, outcome did appear to be a factor. Positive studies were about 11.7 times more likely to be published than negative or questionable studies (P<0.001).
All but one of the 38 trials the FDA deemed positive were published. Of the 36 trials (49%) deemed either negative (24 studies) or questionable (12), only three were published as not positive, and the majority (61%) were never published at all.
This publication bias resulted in 94% (95% confidence interval: 84% to 99%) of published studies appearing to have positive results, whereas according to the FDA, only 51% of trials done (95% CI: 39% to 63%) had positive results.
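The percentages above follow directly from the trial counts reported in the article. As an illustrative arithmetic check (using only the figures given here: 74 trials, 23 unpublished, 38 FDA-positive, and three trials published as not positive):

```python
# Illustrative check of the percentages reported in the article.
total_trials = 74          # FDA-registered trials analyzed
unpublished = 23           # trials never published
fda_positive = 38          # trials the FDA deemed positive
published_not_positive = 3 # negative/questionable trials published as such

published = total_trials - unpublished               # 51 published trials
appear_positive = published - published_not_positive # 48 appear positive

print(f"Unpublished: {unpublished / total_trials:.0%}")                    # 31%
print(f"FDA-positive: {fda_positive / total_trials:.0%}")                  # 51%
print(f"Published appearing positive: {appear_positive / published:.0%}")  # 94%
```

In other words, a reader of the journals alone would see a 94% success rate, while the FDA's own records show roughly half.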
"Not only were positive results more likely to be published," Dr. Turner and colleagues said, "but studies that were not positive, in our opinion, were often published in a way that conveyed a positive outcome."
The methods published for 15% of the studies differed from those prespecified to the FDA.
In every such case, the study had failed its protocol-specified primary outcome, but the authors highlighted another, positive finding as if it were the primary outcome. In nine cases, the nonsignificant prespecified primary outcomes were omitted entirely.
The publication bias also affected the apparent effect sizes of the antidepressants, both overall (P=0.012) and for individual drugs (P<0.001).
Although the efficacy of antidepressants remained significant regardless of whether unpublished studies were considered, the overall weighted mean effect size dropped from 0.41 (95% CI: 0.36 to 0.45) to 0.31 (95% CI: 0.27 to 0.35) when the unpublished studies were included.
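Those two effect sizes account for the overall 32% inflation figure cited earlier: the published-only estimate of 0.41 is about 32% larger than the 0.31 obtained when all FDA-registered trials are counted. A one-line check:

```python
# Illustrative check of the ~32% overall effect-size inflation.
published_only_effect = 0.41  # weighted mean effect size, published trials only
all_trials_effect = 0.31      # including unpublished FDA-registered trials

inflation = published_only_effect / all_trials_effect - 1
print(f"Apparent inflation: {inflation:.0%}")  # ~32%
```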
The findings were limited by restriction to industry-sponsored trials registered with the FDA and to issues of efficacy rather than "real-world" effectiveness, the researchers noted.
And because papers covering multiple studies -- such as those bundling a negative study with a positive study -- were excluded, some studies that were technically published may have been counted as unpublished, they added.
The study was not designed to determine why the publication bias occurred, Dr. Turner said. However, it could have been because the researchers or pharmaceutical companies failed to submit manuscripts, because journal editors and reviewers rejected the studies, or both, he said.
"Perhaps the physician might be slightly less enthusiastic about prescribing them, realizing there are many trials where there was not a positive outcome," Dr. Turner concluded.
Dr. Turner reported having served as a medical reviewer for the FDA. No other potential conflict of interest relevant to this article was reported.
Primary source: New England Journal of Medicine. Source reference: Turner EH, et al. "Selective publication of antidepressant trials and its influence on apparent efficacy." N Engl J Med 2008; 358: 252-60.