The PI’s Guide to Running a Bad Study


Before the study

  1. Don’t ask a medical librarian to do a systematic search of the literature to see what’s already been published.
  2. If you do happen to do a literature review, make sure to ask a research assistant to scour the depths of PubMed.
  3. Don’t consult a biostatistician before embarking on your courageous decision to run a trial. Who needs ’em?!
  4. Don’t pre-register your trial; it’s overrated. Besides, pre-registration would stop you from fishing through your data, and you can’t have that.
  5. Don’t come up with a hypothesis now; you do that AFTER you get your results. Trust me on this.
  6. Don’t do a power analysis/design analysis.
  • If you do happen to mistakenly do a power analysis, make sure to pick the largest effect size that still gives you 80% power and brag about it in your grant (see the sketch after this list for why that backfires).
  • When hunting for effect sizes to plug into a power analysis, make sure to use the published literature, because who cares about publication bias, yeah?
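
For anyone keeping score at home: the required sample size falls as the assumed effect size grows, so plugging in the biggest effect you can find buys you a conveniently tiny (read: underpowered) study. A minimal sketch with statsmodels; the effect sizes below are Cohen’s illustrative benchmarks, not values from any real literature:

```python
# Sketch: sample size per group for a two-sample t-test at alpha = 0.05
# and 80% power. Effect sizes are Cohen's small/medium/large benchmarks,
# used here purely for illustration.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"assumed d = {d}: ~{n:.0f} participants per group")

# Assume d = 0.8 when the truth is d = 0.2 and you'll recruit ~26 per
# group when you actually needed ~394.
```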

Suspect that your intervention may not be that great compared to anything? Here’s a guide to making sure you get great results anyway.

  • Use a high dose of your intervention and a low dose of the comparator to win on efficacy (superiority).
  • Use a high dose of the comparator and a low dose of your intervention to make the comparator look toxic (safety).
  • Focus on the shortest endpoints, where no difference has had time to emerge, and declare the two equivalent (equivalence).
  • Also, save your money and use a super small sample size; with no power, you’re guaranteed to find no difference (equivalence). See the simulation after this list.
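
To be fair to the small-sample gambit, it works beautifully. Here’s a toy simulation (all numbers invented for illustration) showing that with 10 participants per arm, a genuine effect goes undetected most of the time. And remember: failing to find a difference is not the same as demonstrating equivalence, which properly requires pre-specified margins.

```python
# Toy simulation: a real effect (d = 0.5) reliably "disappears" when
# the sample is tiny. All numbers are illustrative assumptions.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def share_significant(n, d=0.5, sims=5000, alpha=0.05):
    """Fraction of simulated trials where the true effect reaches p < alpha."""
    hits = 0
    for _ in range(sims):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(d, 1.0, n)  # true effect of d standard deviations
        hits += ttest_ind(control, treated).pvalue < alpha
    return hits / sims

for n in (10, 64):
    print(f"n = {n} per arm: detects the effect in ~{share_significant(n):.0%} of trials")

# At n = 10 the effect is detected in roughly 1 in 5 trials, so "no
# significant difference" is practically guaranteed and means nothing.
```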

During the study

  1. Forget about blinding your participants or randomizing them (you can just adjust for the confounders later!).
  2. Use small samples so that randomization doesn’t really serve its purpose. No one cares about the law of large numbers. Get out.
  3. When collecting data, have the research assistants do everything; I mean, they obviously care about the study more than anything else, right?
  4. Are your participants dropping out in droves? Forget ’em; burn their data and keep only the completers. Dropout bias tied to the interventions is irrelevant, and only losers impute data (see the sketch after this list for what that does to your estimate).
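
In case anyone wonders what burning the dropouts’ data actually does: if the participants doing badly on the intervention are the ones who leave, a completers-only analysis manufactures a benefit out of thin air. A toy sketch with an invented dropout mechanism and a true effect of exactly zero:

```python
# Toy sketch: informative dropout flatters the intervention arm.
# The dropout rule and all numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# True treatment effect is zero: both arms share the same outcome distribution.
control = rng.normal(0.0, 1.0, n)
treated = rng.normal(0.0, 1.0, n)

# Suppose patients doing badly on treatment (outcome below -0.5) mostly
# drop out; only 20% of them stay in the study.
completers = treated[(treated >= -0.5) | (rng.random(n) < 0.2)]

print(f"control mean (everyone):        {control.mean():+.2f}")
print(f"treated mean (completers only): {completers.mean():+.2f}")

# The completers-only mean drifts to about +0.4, a "benefit" conjured
# entirely by who walked out the door. Intention-to-treat analysis and
# principled imputation exist precisely to prevent this.
```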

Results: Statistical Analyses

  1. Between-group analyses are not useful; only use WITHIN-group analyses for maximal significance.
  2. Send your results over to a statistician to find the significant ones and delete the nonsignificant ones (if they won’t do this for you, tell them they suck).
  3. Thinking about reporting both intention-to-treat analyses and per-protocol analyses? Hahahaha, you must not want to be in academia.
  4. Effect sizes and standard deviations aren’t important. Nor are confidence intervals or p-values (they’re cousins). Just report that your results were significant. That’s it. Wanna go full Bayesian?
  5. Say you got a super large Bayes factor; now you’re loved amongst the subjective nerds.
  6. Transform your data as many times as necessary.
  7. Get rid of any data points that look like outliers to you (see the sketch after this list for what steps 6 and 7 buy you).
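
A quick look under the hood of steps 6 and 7: on pure noise, trying a couple of transformations plus an outlier trim and keeping whichever p-value is prettiest inflates the false positive rate well past the nominal 5%. A toy sketch; the “analysis menu” here is invented for illustration:

```python
# Toy sketch: fish across transformed and outlier-trimmed analyses of
# pure noise, keeping the best p-value. The analysis menu is invented.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)

def trim_outliers(x):
    """Drop points more than 2 SDs from the mean (step 7, formalized)."""
    return x[np.abs(x - x.mean()) < 2 * x.std()]

def best_p(a, b):
    """Step 6 in action: raw, log-transformed, and trimmed; report the minimum."""
    shift = min(a.min(), b.min())  # shift so the log transform is defined
    candidates = [
        ttest_ind(a, b).pvalue,
        ttest_ind(np.log1p(a - shift), np.log1p(b - shift)).pvalue,
        ttest_ind(trim_outliers(a), trim_outliers(b)).pvalue,
    ]
    return min(candidates)  # keep only the prettiest one

sims, hits = 2000, 0
for _ in range(sims):
    a = rng.normal(0.0, 1.0, 30)  # two identical groups: no real effect
    b = rng.normal(0.0, 1.0, 30)
    hits += best_p(a, b) < 0.05
print(f"'significant' findings on pure noise: {hits / sims:.0%} (nominal: 5%)")
```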

Publishing Time

  1. Slice the results and make multiple papers out of this study (salami slicing: more publications, same data).
  2. Ask a research assistant to prop up your hypothesis/hypotheses by finding supportive studies to cite in your introductions and discussions.
  3. Submit the papers ONLY to predatory journals AKA illegitimate publishers. They’re looking out for you.
  4. Make your results seem better than they really are in the press release and explain how your study is groundbreaking.
  5. Do as many interviews with science journalists as possible and use jargon that you don’t understand; this is key.
