Before the study
- Don’t ask a medical librarian to do a systematic search of the literature to see what’s already been published.
- If you do happen to do a literature review, make sure to ask a research assistant to scour the depths of PubMed.
- Don’t consult a biostatistician before embarking on your courageous decision to run a trial. Who needs ’em?!
- Don’t pre-register your trial; it’s overrated. Plus, it would stop you from data fishing (you don’t want that).
- Don’t come up with a hypothesis now; you do this AFTER you get your results. Trust me on this.
- Don’t do a power analysis/design analysis.
- If you do happen to mistakenly do a power analysis, make sure to find the largest effect size possible that will give you 80% power and brag about that in your grant.
- When looking for effect sizes to do a power analysis, make sure to use the published literature because who cares about publication bias, yeah?
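To see why shopping around for a big effect size is so tempting, here’s a rough sketch (normal approximation for a two-sided, two-sample comparison; the helper name is mine) of how the effect size you plug in drives the sample size your grant has to pay for:

```python
# Sketch (not an endorsement!) of the effect-size game: required n per group
# scales with 1/d^2, so an inflated effect "found" in a publication-biased
# literature makes your trial look cheap. Normal-approximation formula.
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample comparison."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

print(n_per_group(0.8))  # optimistic "large" effect: 25 per group
print(n_per_group(0.2))  # realistic small effect: 393 per group
```

Quadruple the assumed effect size and the trial gets roughly sixteen times cheaper; conveniently, it also becomes roughly useless.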
Suspect that your intervention may not be that great compared to anything? Here’s a guide to making sure you get great results.
- Use a high dose of your intervention and a low dose of the comparator (superiority).
- Use a high dose of the comparator and a low dose of your intervention to make the comparator look toxic (safety).
- Focus on the shortest follow-up endpoints to establish there’s no difference between the two (equivalence).
- Also, save your money and use a super small sample size to find no difference (equivalence).
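The “save your money” trick is just underpowering in disguise. A rough normal-approximation sketch (the helper name is mine) of how little power a tiny trial has against a genuinely decent effect:

```python
# Sketch of the "save your money" gag: with a tiny sample, even a genuinely
# effective treatment usually comes out "not significantly different".
# Approximate power of a two-sided two-sample z-test (normal approximation).
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    """Approximate power to detect effect size d with n per group."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    shift = d * (n_per_group / 2) ** 0.5
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

# A medium effect (d = 0.5) with 10 participants per arm: roughly 20% power,
# i.e. about an 80% chance of proudly reporting "no difference".
print(round(approx_power(0.5, 10), 2))
```

With 200 per arm the same comparison would have well over 90% power, which is exactly why you should not do that.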
During the study
- Forget about blinding your participants or randomizing them (you can just adjust for the confounders later!).
- Use small samples so that randomization doesn’t really serve its purpose. No one cares about the law of large numbers. Get out.
- When collecting data, have the research assistants do everything; I mean, they obviously care about the study more than anything else, right?
- Are your participants dropping out horrendously? Forget ’em, burn their data, and keep the data of those who stayed. Bias associated with the interventions is irrelevant, and only losers impute data.
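On the small-samples point: randomization only balances confounders on average, and that balancing kicks in as n grows. A quick pure-stdlib simulation (the function name is mine) of chance imbalance on a standardized baseline covariate:

```python
# Sketch of why "small samples so randomization doesn't serve its purpose"
# actually works (for bad science): with few participants per arm, the arms
# stay imbalanced on baseline covariates just by chance.
import random

def mean_imbalance(n_per_arm, trials=1000, seed=0):
    """Average absolute difference in arm means of a standardized covariate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # baseline covariate (e.g. standardized age) for each arm
        a = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        b = [rng.gauss(0, 1) for _ in range(n_per_arm)]
        total += abs(sum(a) / n_per_arm - sum(b) / n_per_arm)
    return total / trials

print(mean_imbalance(10))    # large chance imbalance between arms
print(mean_imbalance(1000))  # imbalance shrinks as n grows
```

That shrinking imbalance is the law of large numbers doing its job, which is exactly why you were told to get out.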
Results — Statistical Analyses
- Between-group analyses are not useful; only use WITHIN-group analyses for maximal significance.
- Send your results over to a statistician to find significant ones and delete the nonsignificant ones (if they don’t do this for you, tell them they suck).
- Thinking about reporting both intent-to-treat analyses and per-protocol analyses? hahahaha, you must not want to be in academia.
- Effect sizes and standard deviations aren’t important. Nor are confidence intervals nor p-values (they’re cousins). Just report that your results were significant. That’s it. Wanna go full Bayesian?
- Say you got a super large Bayes factor; now you’re loved amongst the subjective nerds.
- Transform your data as many times as necessary.
- Get rid of data points that look like outliers to you.
- Salami-slice the results and make multiple papers out of this study.
- Ask a research assistant to support your hypothesis/hypotheses by finding studies to support it in your introductions and discussions.
- Submit the papers ONLY to predatory journals AKA illegitimate publishers. They’re looking out for you.
- Make your results seem better than they really are in the press release and explain how your study is groundbreaking.
- Do as many interviews with science journalists as possible and use jargon that you don’t understand, this is key.
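For the curious, the “delete the nonsignificant ones” move in the list above is easy to demonstrate: test enough null endpoints and the best p-value is almost always “significant”. A rough simulation (normal-approximation p-values; the names are mine):

```python
# Sketch of selective reporting: run a null intervention against 20 endpoints,
# keep only the best p-value, and you still "discover" something most of the
# time. Uses a normal-approximation two-sample test, stdlib only.
import random
from statistics import NormalDist, mean, stdev

def two_sample_p(a, b):
    """Approximate two-sided p-value via a two-sample z statistic."""
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    z = abs(mean(a) - mean(b)) / se
    return 2 * (1 - NormalDist().cdf(z))

def fishing_trip(endpoints=20, n=50, trials=300, seed=1):
    """Fraction of null trials where at least one endpoint hits p < 0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ps = []
        for _ in range(endpoints):
            a = [rng.gauss(0, 1) for _ in range(n)]  # "treatment" does nothing
            b = [rng.gauss(0, 1) for _ in range(n)]
            ps.append(two_sample_p(a, b))
        hits += min(ps) < 0.05  # report only the "significant" endpoint
    return hits / trials

print(fishing_trip())  # well over half of null studies "work"
```

With 20 independent null endpoints the chance of at least one p < 0.05 is about 1 − 0.95²⁰ ≈ 64%, which is why pre-registration (see above: overrated) exists.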