Forgive me for the title of this article. It’s a bit more dramatic than how I normally roll, but in 2020, real estate is won in the search engines, so we have to write titles that feed the beast.
Let’s pretend an article, infographic, or video touting the latest and greatest COVID-prevention supplement or cocktail gets shared with you. I know it’s far-fetched, but let’s pretend a bit…
You, being a savvy, sophisticated supplement strategist who follows me and my methodology, know darn well that the enemy of our times is misinformation. You don’t want to be like those OTHER folks who just take it at face value, jumping in with both feet and completely changing up their regimen based on what amounts to hearsay. Not you.
In fact, you’re like Beyonce: an independent woman (Or man! I often channel my inner Beyonce…) who wants to roll up your sleeves and vet this information yourself. In this article, I’m going to help you understand medical journals via four reference red flags.
Medical Journals are Complicated
This must be stated clearly and without offense to you, the common reader of my blog: you are not trained to vet medical information. Probably.
Fun fact: modern pharmacy education centers around the validation of clinical studies on drugs, their risks, and their benefits (though it seems many of my colleagues have forgotten these lessons).
It starts really early in our education and becomes a part of practically every course or lesson. We get pretty aggressive; we’re expected to beat up an article down to the statistical methods it uses.
I say that, to say this: we still pretty much stink at it. ESPECIALLY after years of practicing outside intense clinical arenas.
I’m not asking you to become as good as I was when I was being trained, I’m asking you to understand one simple fact: the quality of the reference dictates how much confidence you should have in the advice or recommendation.
Red Flag 1: No References at All (or Unwilling to Provide Them)
Recommendations, especially ones based on new trends, science, or pandemics, should be based on some reference.
While it’s common to write editorials or infographics without references in 2020, it’s irresponsible to make radical recommendations without some solid science.
Don’t make changes to your wellness strategy based on something that lacks references.
An example, throwing myself under the bus a bit: the simple guide to cold recommendations that I share widely has no references on the document itself, but I certainly have those references behind the scenes and would gladly share them.
Red Flag 2: “In mice” and “In vitro”
While there are exceptions, people aren’t rodents.
Animal studies, in general, are nothing we can draw solid conclusions from. What they normally tell us is whether something COULD be safe in people. They also help us identify mechanisms for how something could potentially work in our bodies and biological systems.
Animal studies should be regarded as something that tells us, “we should look into this in humans and see if this might be an option worth exploring.”
This can also be applied to “in vitro” studies, which look at effects outside of the human body, normally in petri dishes, test tubes, and lab models. Again, I’ve known a few people with the personality of a petri dish, but our complex body systems are far different.
These types of studies are generally disregarded by responsible practitioners.
A great follow-up to this article is my podcast with Dr. James Heathers, PhD. He pokes so much fun at scientific media basing their articles on animal studies that he created the Twitter handle “@JustSaysInMice“.
Red Flag 3: Retrospective Studies
The best kind of study is a big one, with proper controls to eliminate all the variables caused by complicated humans and their complicated bodies, done in a prospective manner.
This means we start out with the intention of measuring an effect in a group, then determining if our theory was on the money or not.
Alternatively, we can look at an already completed study or some other historical data and analyze it to uncover other relationships or bits of information that may be helpful.
Retrospective studies CAN be valid and helpful, but there are a number of disadvantages that lower their reliability. We can’t control variables after the fact. Our statistics are restricted. We can’t account for some biases.
Because you and I are not those trained experts I discussed at the onset of this article, you should view retrospective studies as a red flag. As with the rest of the red flags here, you should have less confidence in recommendations based on retrospective studies—especially if they also have other flaws.
Red Flag 4: Theory Passed Off as Gospel
Here are the titles of three journal articles used as a reference for what I believe to be an unfounded COVID supplement protocol:
- “Immune-boosting role of vitamins D, C, E, zinc, selenium and omega-3 fatty acids: could they help against COVID-19”
- “Synergistic effect of quercetin and vitamin C against COVID-19: Is a possible guard for front liners?”
- “Melatonin for the treatment of sepsis: the scientific rationale”
You may see these headlines and think, “Oh man, if they’re thinking that, I DEFINITELY need to add these to my regimen.”
The thing is, these are very weak pieces of literature. They’re normally discussing theory. They may be looking at other poor data (like in vitro, animal, or retrospective trials) and extrapolating from there.
A majority of the time, I think that’s the point. The title of a “medical study” is much like the clickbait article titles of the world today. Like the one I chose for this article, for example. They’re meant to get attention, hoping no one reads the darn thing and sees it lacks any substance.
And what’s weird is that folks will reference these, and it becomes a circular system where one garbage article references another garbage article.
More of these are written, and then some charlatan will say, “There’s a wide body of evidence,” just because the topic shows up a bunch in PubMed.
Yellow Flags That Point To Poor Quality References
Here’s the thing: in the absence of good data, we have to use OK data. In the absence of OK data, we often are stuck with trash.
A responsible medical professional will make solid recommendations based on the best available data. Most importantly, they’ll preface advice if based on weaker data. What they will not do is use these paper-thin justifications to fool you into taking their advice.
I think it’s important that we explore everything. Remember this article’s main thesis: we shouldn’t be changing OUR behaviors based on speculation, especially with such thin and weak justification.
With that being said, there are some “yellow flags” you should keep an eye out for. These are less make-or-break and often require a seasoned eye to fully interpret, but any recommendation they support should be treated with caution.
There are a plethora of examples I could put here, but I’ll stick to two.
The study isn’t directly applicable.
Someone shared this with me: “Melatonin is significantly associated with survival of intubated COVID-19 patients”
Now, the title is titillating… “Melatonin is significantly associated with survival” is eye-catching, to say the least. But if we pay attention, the last words are “of intubated COVID-19 patients.”
We can’t always apply results from one population to another. In the example here, melatonin theoretically could ONLY help if you were so sick you needed intubation. People in the community taking melatonin regularly may not have any benefit at all in preventing or decreasing the severity of COVID.
If people start using this in the community, they may get no benefits but have to contend with the risks.
This is the part the supplement-for-everything mindset is missing: supplements aren’t benign, natural compounds with no risk. We MUST stop overplaying any potential upside while actively ignoring the downsides of supplements.
I’ve talked extensively about the risks of melatonin, especially when used nonchalantly as we tend to do here in this country. A medical professional telling people to use melatonin for COVID prevention because of this study is misadvising their patients, especially if this data is all they’re basing the recommendation on.
Small “sample size.”
There are 7.7 billion people in the world. To perfectly test a theory, we’d technically need to do the “thing” to every single person and see what would happen.

That’s a bit impractical, so we use sampling and extrapolation to our advantage.
Study nerds determine how many people are needed, at a minimum, to assess their theory. Small trials are often pilot studies to see if we measure ANY difference or effect. Ideally, we use bigger and bigger well-designed trials to improve our confidence in the theory.
I don’t want to split hairs here; I’m talking about fewer than 100 people in a study being a yellow flag for its quality.
At best, small studies give us hope that bigger studies are more than justified and should be explored. They tell us where to allocate our resources.
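For the curious, here’s roughly how those “study nerds” do the math. This is a minimal sketch in Python of the standard two-proportion sample-size formula; the function name and the example numbers are mine and purely illustrative, not from any particular study.

```python
from math import ceil
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Rough per-group sample size needed to detect a difference
    between two event rates (p1 vs p2) with the given significance
    level (alpha, two-sided) and statistical power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)       # combined variance of the two groups
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Example: to detect a drop in some event rate from 20% to 10%,
# you'd need roughly 197 people PER GROUP -- call it ~400 total.
print(two_proportion_sample_size(0.20, 0.10))
```

Notice that detecting a smaller difference (say, 20% vs. 18%) balloons the required numbers into the thousands, which is exactly why a 50-person trial can rarely tell us anything definitive.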
Understanding Medical Journals: A Review
Content that completely lacks references, relies on test-tube or animal studies, or looks back retrospectively at old data speaks to very low-quality information.
In a world where misinformation is the enemy of our times, we must each become better interpreters of science and data.
We can’t just assume. Those days are LONG gone. We can’t just say to ourselves, “That person SOUNDS trustworthy!” Even. With. Me.
We must become sophisticated consumers—not of stuff, but information. To usher out the era of unprecedented misinformation, we need an era of unprecedented responsibility.
We need to look past the claims to the references—even if briefly—on the lookout for glaring problems. Hopefully, these red flags we’ve identified will help you understand references and journal articles better.
And no, melatonin, quercetin, zinc, and Vitamin C will NOT prevent you from catching COVID.
Just trying to keep it real…
Neal Smoller, PharmD
Owner, Pharmacist, Big Mouth