
Monday, March 21, 2016

How have pharmaceutical companies corrupted medical literature?

Physicians, pharmacists, nurses, lawyers, administrators, policy makers, and many others depend on medical journals for information.  As the best clinical outcomes are sought for each patient, evidence-based practice is the standard.

Physicians look to medical journals for up-to-date, accurate information about current medications and treatment options.  Peer-reviewed journals containing double-blinded randomized controlled trials are the gold standard, with a meta-analysis of those trials being the best evidence.

Prescribers make medication choices based on the published literature, their personal experience, and the experiences of their patients. 

What if our medical literature is being unduly influenced and altered by those with financial gains at stake?
What if, throughout the process of testing, approving, and marketing new medications, pharmaceutical companies are altering the information prescribers receive?

This article will discuss 7 stages at which biased or false information has been, and may still be, introduced into the medical literature.
  1. Data Ownership
  2. Drug Trial Design
  3. Data Analysis
  4. Ghostwriting Articles
  5. Publication Bias/Omitted Information
  6. Journal Reprints
  7. Advertisements in Journals


STAGE 1: Data Ownership 



Problem: Drug trials are often designed to ensure that the resulting data are owned by the pharmaceutical company and are never made available to clinical research sites, prescribers, or the public. 

Example: In Denmark, 44 industry-initiated randomized trials were approved in 1994-1995 by the Scientific-Ethical Committee for Copenhagen and Frederiksberg.  
Forty (91%) of the protocols contained constraints on publication rights, and 22 (50%) noted that the sponsor either owned the data, needed to approve the manuscript, or both. None of these constraints were stated in any of the trial publications.

Problem: In the competition for research funds, American academic institutions are likely to compromise ethical standards, granting data ownership and more to pharmaceutical companies.

Example: In a survey of 107 American medical schools, 80% would allow a multicenter trial agreement that granted data ownership to the sponsor, and 69% of the administrators said that competition for research funds created pressure on them to compromise the conditions of the contract.
A second survey, of 108 American medical schools, found that “Academic institutions routinely engage in industry-sponsored research that fails to adhere to ICMJE (International Committee of Medical Journal Editors) guidelines regarding trial design, access to data, and publication rights.”

When drug trials are pre-designed to grant data ownership, analysis, and manuscript approval to the industry sponsor, the potential for biased publications escalates.


STAGE 2: Drug Trial Design


Problem: In head-to-head drug trials, the standard medication may be dosed or administered incorrectly, making the new drug look better by faulty comparison.

Many drug trials are designed to compare a new medication to the current standard medication.  If the standard medication is dosed incorrectly, or administered in the wrong way, its efficacy may decrease.  In a head-to-head comparison, this can lead to the incorrect conclusion that the new medication is better because it showed higher efficacy than the standard medication.

Example: Prior to FDA approval of voriconazole, a study was designed to compare voriconazole to amphotericin B in the treatment of invasive aspergillosis:

277 patients were randomized into the two treatment groups and completed the trial.  The standard dosing and route of administration were followed:
  • IV voriconazole for 7 days, then oral medication.
  • IV amphotericin B

However, the length of treatment was substantially different for the two groups.
  • The median duration of voriconazole treatment was 77 days.
  • The median duration of amphotericin B treatment was 10 days.

With the new medication (voriconazole) being given for an additional 67 days, it is not surprising that the conclusion stated: “Initial therapy with voriconazole led to better responses and improved survival and resulted in fewer severe side effects than the standard approach of initial therapy with amphotericin B.”

STAGE 3: Data Analysis


Problem: Data analysis is often controlled by the industry sponsor, and the data are often manipulated in favor of the new drug.
Example: When trial endpoints are changed or modified, it is impossible to know whether new drugs met their goals.  A 2011 study analyzed all randomized controlled trials published in 6 medical journals over a two-year period (2008-2010).  The journals selected were the New England Journal of Medicine, Lancet, JAMA, Annals of Internal Medicine, BMJ, and Archives of Internal Medicine. Of the 2,592 original articles reviewed, only 316 reported a pre-specified primary endpoint. We don’t know what the other 2,276 trials were hoping to prove when they started; only the sponsoring drug company and the FDA are likely to have that information.
Of the 316 studies that stated their pre-determined endpoint, 116 (37%) ended up reporting a surrogate primary endpoint and 106 (34%) used a composite primary endpoint.
Surrogate and composite endpoints do not always represent findings that are clinically or statistically significant.
Also, of the 118 trials in which the primary endpoint involved mortality, 32 (27%) used disease-specific mortality rather than all-cause mortality. Thus we do not know the cause of death of many patients who died during the trials, or whether those deaths were disease related.
These data manipulations were found to be more common in drug-industry-sponsored trials.
Trials that were exclusively industry sponsored were 16% more likely to use surrogate endpoints than trials with mixed or non-industry funding.
Industry-funded trials were also 23% more likely to report only disease-specific mortality endpoints.

STAGE 4: Ghostwriting Articles


Problem: Ghostwriting.  When you don’t know who wrote an article, you cannot judge the content by the author’s expertise or ethics.

Example: From 1999-2001, 96 journal articles were published about sertraline (Zoloft). Over half of the articles were prepared by one medical writing agency, Current Medical Directions (CMD). Their 55 articles were all positive in their portrayal of Zoloft, and only 2 of them acknowledged writing support from people not listed as authors.  Who analyzed the data?  Who wrote the bulk of the text?  Who drew the conclusions? We don’t know.  These potentially ghostwritten articles were published in well-respected journals such as JAMA (Journal of the American Medical Association), JAACAP (Journal of the American Academy of Child and Adolescent Psychiatry), and Archives of Family Medicine.  The articles written by CMD had, on average, a higher impact factor and higher citation rates than the other 41 articles.

Problem: Articles are often prepared based on a researcher’s study, and then sent to the researcher for their approval, listing the researcher as the author. 

Example: Dr. David Healy had performed research on anti-depressants.  He received an email from a drug company representative stating “In order to reduce your workload to a minimum, we have had our ghostwriter produce a first draft based on your published work. I attach it here.”
The article listed Dr. Healy as the sole author, yet he hadn’t written a single word.  He did not agree with their “glowing review of the drug” and suggested some changes.  The drug company replied that he had missed some 'commercially important' points. The ghostwritten paper was later published in a psychiatric journal in its original form, under another doctor's name.


STAGE 5: Publication Bias/Omitted Information


Problem:  Critical information is often not published.

Example: When research misconduct occurs, it is not mentioned in journal articles based on those flawed studies.
Every year, the FDA inspects several hundred clinical sites performing biomedical research on human subjects. When inspectors find evidence of research misconduct, they publish it in a report on the FDA website. From 1998-2013, the FDA identified 57 clinical trials with serious problems, including falsification of data, protocol violations, and failure to protect the safety of patients.
Those 57 trials led to 78 publications.  Only 3 of the 78 publications (4%) mentioned the objectionable conditions or practices found during the inspection.  No corrections, retractions, expressions of concern, or other comments acknowledging the key issues identified by the inspections were subsequently published.

Problem: If a study’s results are unfavorable to a new drug, they are often not published, leading to a publication bias in favor of the new medication.

Example: A search for all studies performed on 12 antidepressants found 74 trials registered with the FDA.  38 trials showed positive results for the antidepressant, and all but one of them were published.  The other 36 studies showed negative or questionable results: 22 of them were not published, 11 were published in such a way as to make the outcome appear positive, and only 3 were published showing the negative results.
(Chart: Only Published Trials)

Thus, in the published literature, 94% of antidepressant trials showed positive results.  By contrast, FDA analysis of all antidepressant trials showed that only 51% were positive.  It should also be noted that 3,449 study participants never had their data published.
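The arithmetic behind those two percentages can be checked directly. Below is a minimal sketch in Python, using only the trial counts quoted in this example, that recomputes the positive rate a journal reader would see versus the positive rate across all FDA-registered trials.

```python
# Trial counts quoted above (FDA-registered antidepressant trials).
total_trials = 74
fda_positive = 38            # trials the FDA judged positive
published_positive = 37      # all but one positive trial was published
published_spun = 11          # negative trials written up as if positive
published_negative = 3       # negative trials published as negative

published_total = published_positive + published_spun + published_negative  # 51
apparent_positive_rate = (published_positive + published_spun) / published_total
registry_positive_rate = fda_positive / total_trials

print(f"Published trials: {published_total}")
print(f"Positive rate in the published literature: {apparent_positive_rate:.0%}")  # ~94%
print(f"Positive rate across all registered trials: {registry_positive_rate:.0%}")  # ~51%
```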



(Chart: Including Unpublished Trials)

STAGE 6: Journal Reprints


Problem: Medical journals can earn higher profits if they publish papers sponsored by the pharmaceutical industry.  This creates an incentive to give those papers preferential treatment.

Example: Medical journals make money from publications and reprints, and journals that publish a study funded by the pharmaceutical industry have higher numbers of reprints ordered.
In a study looking at reprint orders, papers funded by the pharmaceutical industry were more likely to have reprints ordered than were control papers (odds ratio of 8.64).  Even a study only partially funded by pharmaceutical companies was still more likely to have reprints ordered (odds ratio of 3.72).
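For readers not used to odds ratios, here is a small worked example. The counts below are purely hypothetical (the study's actual figures are not reproduced in this post); they are chosen only to illustrate how an odds ratio of roughly the magnitude reported above arises.

```python
# Hypothetical counts, for illustration only (not the study's actual data).
industry_with_reprints, industry_without = 60, 40  # industry-funded papers with/without reprint orders
control_with_reprints, control_without = 15, 85    # control papers with/without reprint orders

odds_industry = industry_with_reprints / industry_without  # 60/40 = 1.50
odds_control = control_with_reprints / control_without     # 15/85 = ~0.18
odds_ratio = odds_industry / odds_control                  # ~8.5, the same ballpark as the reported 8.64

print(f"Odds ratio: {odds_ratio:.1f}")
```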
This matters because there is substantial money to be made from reprints.
In a study of income from reprints, it was found that The BMJ made £12,458 on average per reprint order, while Lancet earned £287,353 per reprint order.

STAGE 7: Advertisements in Journals


Problem: Drug advertisements in medical journals are often misleading or inaccurate.
Example: In 1992, Annals of Internal Medicine examined the accuracy of advertisements in 10 medical journals.  They found 109 full-page pharmaceutical advertisements. Each ad and its cited sources were sent to three reviewers (two physicians in the field and a clinical pharmacist).  The reviewers concluded that 34% of the advertisements required major revisions and 28% should not have been published.
Example: A 2003 study published in Lancet analyzed all advertisements for antihypertensives or lipid-lowering agents in 6 medical journals over a period of one year.   Of 287 advertisements, only 125 listed at least one reference.  18% of those references could not be found, and 44% of the references did not support the statement in the advertisement.

DISCUSSION


"Medical journals are an extension of the marketing arm of the pharmaceutical companies" according to Richard Smith who worked for 13 years as editor of The BMJ (British Medical Journal).

The deeper we dig, the more evident it becomes that our medical literature is not as pure or objective as we might wish to believe.  Often our journals are just another form of advertising.  This is not limited to small-circulation journals or case reports.  The most well-respected, highest-circulation journals, including NEJM, JAMA, BMJ, Lancet, and others, have all suffered bias from pharmaceutical companies.

The potential for bias is evident from the very beginning of the process.  From the initial design of drug trials, contractual agreements ensure that the results will be owned and analyzed only by the sponsoring company.  While the sponsor must register the trial with the FDA, it is under no obligation to publish the results of the study.  Papers can be written by anyone.  Ghostwriters are commonly employed by pharmaceutical companies to prepare positive papers which are then published under a researcher’s name.  While this may have some valid benefits, such as freeing up time for the researcher to continue his or her work, it is disingenuous.  The data are only as good as the person analyzing and explaining them.  If we don’t know the true credentials of the author, nor their financial interests, how can we judge the validity of their findings?

The same is true of journal editors.  How can we judge the contents of a journal when there is a financial incentive to publish papers sponsored by the pharmaceutical industry?

Pharmaceutical companies can do excellent, valid research and bring good medications to the market. Authors can be trustworthy, journal editors can be ethical and discerning, advertisements can be accurate.

However, often these things don’t occur.  Physicians and hospitals spend thousands of dollars subscribing to medical journals.  A subscription to one database of medical literature can cost up to $500 per year. 

Patients are told by advertisements to “Ask your doctor.”  When they do, they are seeking their physician’s informed, educated opinion.  Is that opinion based on evidence and fact?  Or is it based on a paper that was published for financial gain, after being ghostwritten by an unknown author, based on a study which was analyzed to skew results, from data which are proprietary and cannot be re-examined, with a protocol that was altered or not followed in the first place?

Physicians are required to give all patients “informed consent.”  If the data are that suspect, is there really any such thing?

 - written by Matt Larsen D.O. (References to all studies and quotes are available)

Thursday, March 29, 2012

Is “Prozac” just “Placebo” misspelled?


A few facts:
In 2010 the top 5 antidepressants were prescribed 115 million times in the USA.

Antidepressants have side effects. The most common are weight gain, sexual dysfunction, and upset stomach; the most severe are suicidal thoughts and fatal birth defects.

Conclusion: These are not benign medications.

About me – I did not read just one article about antidepressants and then write a blog post about it. I have researched this topic. I have read many articles, including the Newsweek article and the drug companies’ responses. I have read studies sponsored by drug companies as well as those with no “Pharm” funding whatsoever.

I wrote before about the publication bias in antidepressant research.
If we look at EVERY published and unpublished study submitted to the FDA for a set of antidepressants and compare the drug response with placebo response, guess what we find?

I'll give you a hint...it's depressing.

Everyone in the studies took a "depression rating test" before and after the drug trial.
If patients on Prozac said they got better by 10 points, those on placebo got better by 8 points. Yes, placebo had 80% of the effect.

You want the exact numbers?
There is a scale for rating depression called the HAM-D (Hamilton Depression Rating Scale), which has been used since 1960 for pretty much all antidepressant trials. The higher you score, the more depressed you are.

0-7 = No Depression

8-13 = Mild Depression

14-18 = Moderate Depression

19-22 = Severe depression

23+ = Very Severe Depression

So you take the test before the drug trial. Then you get either an antidepressant or placebo and take it for a few weeks. Then you take the test again. How much lower your score is = your improvement.

Average improvement on antidepressants: 9.6 points (that would move you one or two levels on the scale, for example from severe to mild, or from moderate to mild or none).

So that’s some real improvement.

Average improvement on placebo: 7.8 points

So the medication works better than placebo – but not much. How do we tell if that difference matters? Does a difference of 1.8 points matter?  If you do a statistical analysis – YEP, it’s significant.

If you do a clinical analysis (do the patients notice any real difference?) – NOPE, it's insignificant.

It turns out that to be clinically significant the drug needs to be at least 3 points better than the placebo. (So says the National Institute for Clinical Excellence)
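Here is the same arithmetic as a short Python sketch, using only the averages quoted above and NICE's 3-point criterion, to make the statistical-versus-clinical distinction concrete.

```python
# Average HAM-D improvements quoted above, and NICE's clinical-significance bar.
drug_improvement = 9.6       # points, antidepressant groups
placebo_improvement = 7.8    # points, placebo groups
nice_threshold = 3.0         # minimum drug-placebo difference NICE considers clinically significant

difference = drug_improvement - placebo_improvement      # 1.8 points
placebo_share = placebo_improvement / drug_improvement   # ~0.81

print(f"Drug-placebo difference: {difference:.1f} points")
print(f"Share of the drug effect also seen on placebo: {placebo_share:.0%}")
print("Clinically significant?", difference >= nice_threshold)  # False
```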

SO WHAT DOES IT ALL MEAN?

Americans are taking drugs for depression and their depression is getting better. If they were getting a sugar pill (but thought it was the drug) they’d do just about as well.

So you tell me - Knowing the side effects and the costs – is it worth it?

Wednesday, February 22, 2012

Antidepressants - Publication Bias

In 2008, researchers compared all the published data about antidepressants with the FDA's own records.  They found that 51 studies had been published.  The FDA's records of how many studies had actually been done on those antidepressants showed a much higher number - 74.

Where were the missing 23 studies?  (Those 23 studies had 3,449 patients who went through full drug trials, and their results were never published)

Why weren't they published?

The researchers then looked at the 51 published studies to see how the drugs were represented in each publication.  The graph below shows the result of that analysis:

(White Boxes above the red line are studies that showed that the drug worked.  Black Boxes below the red line are studies that had negative or mixed results)


Basically this says that out of the 51 published studies, 48 showed that the drug had a positive effect.  That's 94% of the published studies showing a positive result!
There were only 3 published studies that showed that the drug had negative or mixed results. 
THEN, they looked at all 74 trials that had been done, without relying on the publications, and analyzed the FDA's data themselves to see what they would find.  Were the unpublished studies mostly positive, or negative?  Were all the published studies portrayed accurately?
The graph below shows the result when all 74 trials were analyzed from the FDA's data:


Yep.  It turns out that 36 studies showed mixed or negative results, and only 38 studies were positive.  The positive rate dropped from 94% to 51%.
All but one of the 23 missing studies showed negative or mixed results, and 11 of the published studies were misrepresented in the publication to make them look like they had a positive result.
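A short Python sketch, using only the counts above, shows how 36 negative or mixed trials end up as just 3 visibly negative papers once the unpublished and repackaged studies are set aside.

```python
# Counts from the analysis described above.
negative_trials = 36         # trials judged negative or mixed in the FDA records
never_published = 22         # negative trials that never appeared in print
published_as_positive = 11   # negative trials written up as if positive

visible_negatives = negative_trials - never_published - published_as_positive
print(f"Negative trials a journal reader actually sees as negative: {visible_negatives}")  # 3
```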

What does this say about "double-blinded, placebo-controlled, randomized controlled trials"?  What does it matter that the studies are done correctly if they are misrepresented or selectively published?

This is only part of the depressing news about anti-depressants.  More to come...

Source: New England Journal of Medicine Article