SAKS MedComms Team

3 Things Most People Get Wrong About P-values


 

For those of us in medical writing and marketing, exposure to scientific studies and the probability values they report (more commonly referred to as p-values) is nearly constant. P-values are genuinely useful for reporting and assessing data, but as with any statistical tool, we must understand the context in which a p-value was derived or we leave ourselves open to misinterpreting the information in front of us.


So, you ask, what are these 3 things most people get wrong when it comes to p-values? Below is the list of potential p-value pitfalls we have compiled:


1. 0.05 is the magic number

Collectively, we have agreed that a p-value below 0.05 indicates a significant result and that the null hypothesis can be rejected. However, a p-value above 0.05 does not mean there is no real effect. Studies with a small N (sample size), or whose assumptions (such as normality) are strained, can produce larger p-values even when a genuine effect is present. It is better to read a p-value greater than 0.05 as meaning there isn't enough evidence to reject the null hypothesis, not that the null hypothesis is true or that the alternative hypothesis is false.
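To make that concrete, here is a minimal simulation sketch in Python (with invented numbers, not data from any SAKS Health project): a real treatment effect is built into the data, yet with only 12 patients per arm most simulated studies fail to cross the 0.05 threshold.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_per_arm = 12        # small N, as in many rare disease studies
true_effect = 0.5     # a genuine difference of half a standard deviation
n_simulations = 5000

p_values = np.empty(n_simulations)
for i in range(n_simulations):
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)
    treated = rng.normal(loc=true_effect, scale=1.0, size=n_per_arm)
    p_values[i] = stats.ttest_ind(treated, control).pvalue

print(f"Share of simulated studies reaching p < 0.05: {(p_values < 0.05).mean():.0%}")
# Even though the effect is real, many runs land above the 0.05 cutoff,
# so p > 0.05 here reflects limited evidence, not "no effect".
```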


At SAKS Health we partner with several companies in the rare disease space, where clinical data often come from studies with very small sample sizes. Working in this space, we're mindful that these sample sizes, and the results they produce, behave quite differently from those of larger studies in non-rare disease categories.


A p-value below 0.05 can also be misleading, as other factors, such as testing many endpoints or subgroups, may have contributed to that statistically significant outcome. This isn't to say we should ignore p-values; rather, we should take into consideration the numerous variables that could have influenced whatever number p is equal to, which leads into our second point.
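As a rough illustration of those other factors, the sketch below (again with simulated, hypothetical data) tests 20 endpoints on which the treatment truly does nothing; on average, about one in twenty such comparisons will still come back "significant" by chance alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_per_arm, n_endpoints = 50, 20

false_positives = 0
for endpoint in range(1, n_endpoints + 1):
    control = rng.normal(size=n_per_arm)   # no real difference on any endpoint
    treated = rng.normal(size=n_per_arm)
    p = stats.ttest_ind(treated, control).pvalue
    if p < 0.05:
        false_positives += 1
        print(f"Endpoint {endpoint}: p = {p:.3f} (significant by chance)")

print(f"{false_positives} of {n_endpoints} truly null endpoints crossed 0.05.")
```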


2. Extrapolating from the p-value

In the one specific study you are reading, you know whether or not statistical significance was reached; what you don't necessarily know is how generalizable those findings are. There is currently a strong push for more replication studies: when a different team of researchers can reproduce the findings, the initial study's results gain credibility.


As we know, researchers face many challenges simply getting a study completed and published, and adding a replication study not only slows the process but adds to those challenges, especially the budget. There isn't a simple or easy solution to increasing the number of replication studies, but they are vitally important for establishing how useful a set of results really is. If you're reviewing a study and its p-values, be aware that those numbers speak only to that very specific study situation.
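The short sketch below, which reuses the same simulated design and the same true effect in every run, shows how much a p-value can move between replications of a modestly sized study; none of these numbers come from a real trial.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2024)
n_per_arm, true_effect = 30, 0.4   # identical design and true effect each run

for replication in range(1, 6):
    control = rng.normal(0.0, 1.0, n_per_arm)
    treated = rng.normal(true_effect, 1.0, n_per_arm)
    p = stats.ttest_ind(treated, control).pvalue
    print(f"Replication {replication}: p = {p:.3f}")
# Same design, same underlying effect, yet the p-values can vary noticeably
# from one replication to the next.
```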


We also find that when it comes to communications with payers and organized customers, this audience has a strong preference for real-world applicability and for the results a drug will deliver in different sub-populations of patients. While it is difficult to generate a real-world study that replicates the results of the clinical trials, there are other methods that can be used, which gets us to our next point.


3. The p-value is the most important number on the page

While a p-value does tell us important information, it cannot tell us the meaning, importance, or actual clinical value of a result. When it comes to publishing a study, the p-value is still the prized statistic, but for the actual usefulness of a study there are other analyses that help articulate a more comprehensive picture of what the data are telling us. Effect sizes convey how large a difference is in practical terms, confidence intervals show the range of plausible effects and how precisely they were estimated, and replication studies and the design of the study itself all provide useful information for evaluating the study and its associated p-values.
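As a sketch of that more comprehensive picture, the example below uses invented numbers for a hypothetical symptom score (not real trial data) and reports an effect size (Cohen's d) and an approximate 95% confidence interval for the difference in means alongside the p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
control = rng.normal(50.0, 10.0, 40)   # hypothetical symptom score, 40 patients
treated = rng.normal(45.0, 10.0, 40)   # lower is better in this sketch

p_value = stats.ttest_ind(treated, control).pvalue

# Effect size: Cohen's d, the mean difference scaled by the pooled SD.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (treated.mean() - control.mean()) / pooled_sd

# Approximate 95% CI for the difference in means (normal-based, Welch-style SE).
diff = treated.mean() - control.mean()
se = np.sqrt(control.var(ddof=1) / control.size + treated.var(ddof=1) / treated.size)
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"p = {p_value:.3f}, Cohen's d = {cohens_d:.2f}, "
      f"95% CI for the difference: [{ci_low:.1f}, {ci_high:.1f}]")
```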


This isn’t to assume researchers and scientists don’t want to be able to provide detailed analyses and replication studies of their work, the problem lies more at the heart of scientific research and the overall lack of funding and acceptance of failure in the field. Until globally we have agreed that science is built on learning from failures and providing more resources to address these needs it will be important for those of us interpreting p-values to acknowledge the pitfalls to viewing it as a standalone number and indicator of success.


So, if you’re viewing and using p-values in your day to day job you may not always have the time to do a comprehensive critique of the data in front of you, but being able to acknowledge that the p-value is just one aspect to the study can go a long ways in not only your personal understanding of the material but also in your eventual presentation of that content to others.


Be sure to check back in at SAKS Health as we’ll have a follow-up blog post that looks more closely at value assessment challenges for products under development to treat rare diseases and other conditions with high unmet needs and data limitations.

 