Is it really supported by science?

On its surface, this question isn't all that hard to answer. The typical internal translation is, "Is there a published research paper that supports this?" While that's a very common thought, there are a few problems with it.

Problem #1: Confirmation bias.

So there's a paper with an affirmative conclusion. Is it the only paper on that subject? Are there papers with a negative conclusion?

Confirmation bias and cherry-picking can let someone paint a distorted picture of what the research really has to say on a subject. Confirmation bias seeks out only affirmative research, while cherry-picking is the intentional disregard of research that doesn't affirm your assertion. Both methods of research review fail to appropriately consider the entire body of research.

This does not mean that every single research paper on a subject must be read before a reader can form an opinion; for many subjects, that would be an onerous task. In my opinion, it is generally sufficient to include discussion of both affirmative and negative research.

Problem #2: You may be reading a lie.

Some people are not smart enough to understand what they've read. Some people don't even read the research papers that they cite. Some people are so disingenuous with their manipulation of the research that it is equivalent to a bald-faced lie.

A once prominent pitching voice* frequently claims that his hypothesis is supported by science; however, the paper he cites in his defense contains conclusions that neither support nor refute his hypothesis. The comment to which he often refers is actually just a hunch offered by the paper's primary author, and even the primary author notes that the research does not support it!

The only way to parse through claims like this one is to read the research for yourself, especially when investigating a potential coach.

* This is vague on purpose. I am not trying to start a flame war here.

Problem #3: Non-specific conclusions, poorly worded abstracts.

I recently read a 21-year-old research paper for the first time. What caught my attention was the conclusion in the abstract that explicitly stated, "This finding suggests that the muscles on the medial side of the elbow do not supplant the role of the medial collateral ligament during the fastball pitch."

After digging into the paper, it's clear that this conclusion is not as generally applicable as its language suggests. The full text of the study states that every member of the test (injured) group had pain when they threw.

In other words, there were no asymptomatic injured pitchers, and since pain inhibits performance, it is impossible to know which factor was responsible for the measured differences: the structurally compromised UCL or the pain itself.

Everything about the study was fine except for the wording in the abstract. Because the abstract completely skips over the fact that the entire injured group actively felt pain, it's impossible to know without reading the full text that the abstract's conclusion was specific rather than general.

It would have been 100% accurate with only four extra words: "This finding suggests that the muscles on the medial side of the elbow do not supplant the role of the medial collateral ligament during the fastball pitch in injured, symptomatic pitchers." Those four words pack a lot of meaning into the conclusion.


One of the tougher issues that I think a lot of people have with research papers is understanding exactly what they're reading. Frequently, people only have access to a paper's abstract, and as described above, that can be pretty misleading.

Maybe it's just delusions of grandeur on my part, but I'm planning a research review series that will aim to dig into the guts of some published research on pitching, throwing, and arm health. Features will include study design, discussion topics (some papers have extremely interesting discussion sections), and conclusion analyses. Look for it in the coming weeks.