Vol. 154 September 15, 2016 READER BEWARE, Take a Grain of Salt With Media Hype About Medical Advances

Headlines touting a new drug or a new procedure that is “much better” than the old one are very common in our media. Some of them are true. Some of them are misleading. Most of them depend on the definition of “better” in the research study or clinical trial. A recent issue of the New England Journal of Medicine reviewed the “changing face of clinical trials” and outlined, in detailed technical language, what its readers (physicians and other health professionals) should look for in published studies and clinical trials to confirm that a simplified “positive outcome reported” is significant and relevant. (1)

It inspired me to give similar “heads-ups” to my more general readers so they might be better evaluators of media announcements and commercials about medical advances.

Be skeptical about percentages
“Drug A has 50% less side effects than Drug B” or “Drug A is 50% more effective than Drug B.”
If 2 out of 100 patients had a side effect with Drug A, and Drug B’s side effects occurred in 4 of 100 patients, that is a 50% reduction of an already rare event, and it is probably not clinically relevant.

“Antibiotics reduced the time out of work (or out of school, or days of fever) by 50%”.
This could mean “time absent” went from two days to one day, which is not all that significant considering the cost and potential side effects of antibiotics.
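The gap between a relative and an absolute reduction can be made concrete with a few lines of arithmetic. Here is a minimal sketch using the hypothetical 2-in-100 versus 4-in-100 numbers above (the function name and layout are mine, not from any study):

```python
# Why a "50% reduction" headline can describe a tiny real difference.

def risk_summary(events_a, events_b, n=100):
    """Return (absolute, relative) risk reduction of Drug A versus Drug B."""
    rate_a = events_a / n              # Drug A: 2 per 100 -> 2%
    rate_b = events_b / n              # Drug B: 4 per 100 -> 4%
    absolute = rate_b - rate_a         # the percentage-point difference
    relative = absolute / rate_b       # the headline-friendly "50%"
    return absolute, relative

absolute, relative = risk_summary(events_a=2, events_b=4)
print(f"Absolute risk reduction: {absolute:.0%}")  # 2%: two fewer patients per 100
print(f"Relative risk reduction: {relative:.0%}")  # 50%: the number in the headline
```

Both numbers come from the same data; the headline simply picks the larger one.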

For those of you who want to dig deeper, ask for the P value of the positive outcome. The P value estimates how often chance alone would produce a difference at least as large as the one observed if the two treatments were actually equivalent. By convention, a P value of 0.05 or higher means the difference is “not significant”. Many large medical studies hold their main findings to a much stricter standard, a P value of less than 0.001, written as P<0.001; a difference that clears that bar is very unlikely to be a fluke. Looking at P values is an easy way to avoid the illusory trap of percentages.
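To make the idea concrete, here is a rough chance-only simulation (my own illustration, not from the NEJM piece). It assumes Drug A and Drug B are truly identical, with both causing side effects in 4% of patients, and asks how often random variation alone produces the 2-per-100 gap from the earlier example:

```python
# Estimate a P value by simulation: if the drugs were identical,
# how often would chance alone produce a gap of 2 per 100?
import random

random.seed(1)                      # fixed seed so the sketch is repeatable
n_patients, true_rate = 100, 0.04   # assumed identical 4% side-effect rate
observed_gap = 2 / 100              # the gap reported in the example above
trials, at_least_as_big = 20000, 0

for _ in range(trials):
    a = sum(random.random() < true_rate for _ in range(n_patients))
    b = sum(random.random() < true_rate for _ in range(n_patients))
    if abs(a - b) / n_patients >= observed_gap:
        at_least_as_big += 1

# Well over half of the chance-only trials show a gap this big,
# i.e., the estimated P value is far above 0.05.
print(f"Estimated P value: {at_least_as_big / trials:.2f}")
```

In other words, with only 100 patients per group, a 2-per-100 difference is exactly the kind of gap that chance produces routinely.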

“Dementia Incidence is Decreasing!”
This was a February 2016 “headline”, admittedly in the back pages or sidebars, of several newspapers and magazines. It was based on data from the ongoing, well-respected Framingham Heart Study, which has been following the same people since 1975. The article listed declines of 22%, 38%, and 44% in each successive epoch (an epoch is about 15 years) from 1975 to 2010 among 5205 persons over 60 years old.
Looks impressive!
Again the percentages.
The actual incidence went from 2.8 per 100 persons developing dementia to 2.0 per 100. Those numbers seem a bit less dramatic to me. To top it all off, the risk reduction was observed in ONLY those who had at least a high school diploma. I’m glad that I am in that population subgroup, but that finding raises questions about how well the study’s results apply to the general population.
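The same relative-versus-absolute arithmetic applies here. A sketch using the per-100 figures just quoted (the variable names are mine):

```python
# Incidence figures quoted above, in cases per 100 persons.
before, after = 2.8, 2.0

absolute_drop = before - after            # 0.8 fewer cases per 100 persons
relative_drop = absolute_drop / before    # what a headline tends to quote

print(f"Absolute drop: {absolute_drop:.1f} cases per 100")  # 0.8 cases per 100
print(f"Relative drop: {relative_drop:.0%}")                # about 29%
```

A 29% overall relative drop sounds impressive; 0.8 fewer cases per 100 people is the same fact stated plainly.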

Is the positive outcome of the study clinically relevant?
Tests of some new drugs treating diabetes have shown a much better control of blood sugars, but NO reduction in cardiovascular events and even a HIGHER mortality rate.

Certain cancer tests may be shown to find cancers earlier, yet produce no reduction in patient morbidity or mortality. The PSA test for prostate cancer “found” many more cases of prostate cancer, but did not result in any reduction in deaths from prostate cancer. Later studies even showed that the PSA test often led to unnecessary further tests and treatment, so the age criteria recommendations for obtaining a PSA were changed in 2012.

Multiple studies of ICU patients have shown “better” physiological or laboratory values resulting from selected treatments, but NO change in length of stay or mortality rates in the patients receiving the new treatment.

Is the study large enough to be reliable?
This can be tricky. The study should involve enough patients to be statistically sound (there’s the old P<0.001 value again), but big numbers are not a guarantee. A recent article on the effectiveness of CPAP (continuous positive airway pressure) treatment for Obstructive Sleep Apnea (OSA) was based on studying close to 2500 patients. Sounds big to me, but look at how they got to that number.

15,325 patients were assessed for eligibility in the study.
9481 declined to participate or were excluded for other reasons, leaving 5844 who met the study’s diagnostic criteria.
2598 were then excluded for having symptoms that were too mild, leaving 3246 who entered a one-week trial period.
529 were then excluded for poor compliance or other reasons, leaving 2717 patients who were randomized into the study.
30 were then excluded from the analysis for a variety of reasons, leaving 1346 receiving the new treatment and 1341 receiving the standard treatment.
62 receiving the new treatment and 85 receiving the standard treatment discontinued, resulting in 1284 analyzed for the new treatment and 1256 analyzed for the standard treatment.
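The attrition above can be re-traced in a few lines; the labels are my shorthand for the reasons listed, and the arithmetic checks out against the final analyzed totals:

```python
# Re-tracing the enrollment funnel quoted above: how 15,325
# screened patients ended up as roughly 2,500 analyzed ones.
screened = 15325
exclusions = [
    ("declined or otherwise excluded", 9481),
    ("symptoms too mild", 2598),
    ("poor compliance in the one-week trial", 529),
    ("excluded from the analysis", 30),
    ("discontinued treatment (62 new + 85 standard)", 62 + 85),
]

remaining = screened
for reason, n in exclusions:
    remaining -= n
    print(f"{remaining:>6} left after {n:>5} {reason}")

# The final count equals the 1284 + 1256 patients actually analyzed.
assert remaining == 1284 + 1256
```

Fewer than one in six of the patients originally screened contributed to the final analysis.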

Besides suggesting how difficult the logistics of a clinical study can be, a markedly descending number of study participants like this can raise concerns about a selection bias of patients, or as they say, “There’s many a slip twixt the cup and the lip.”

Oh, yes, the results of the study?
“CPAP treatment significantly (P<0.001 again) reduced snoring and daytime sleepiness, but did not prevent cardiovascular events (P values 0.96 to 0.07)”.

Also, there were so many variables in this complex study, such as “duration of use” (an average of 3.3 hours a night), “degrees of compliance” with protocols, and differing “severity of symptoms”, that the NEJM felt compelled to publish an editorial in the same issue urging caution about the impact of this study on current clinical practice (see the comments about clinical relevance above).

Conclusion:
More often than not, the new procedure or the new drug is more expensive than the “old” one. That adds another reason to ask your doctor whether it is really better than the previous one. Remembering that “if it happens to me, it’s 100%”, what is a patient supposed to do? How can we evaluate this bombardment of new advances?

“Ultimately, physicians at the point of care bear the final responsibility for accurately interpreting clinical trial results and for integrating regulatory and guideline recommendations to make the best treatment decisions for each patient in their care” (1)

References:
1. “The Primary Outcome is Positive – Is That Enough?”, New England Journal of Medicine, Sept. 8, 2016, 375;10 p.371
