
Sunday, May 22, 2022

Common Things Are Common, But What is Common? Operationalizing The Axiom

"Prevalence [sic: incidence] is to the diagnostic process as gravity is to the solar system: it has the power of a physical law." - Clifton K Meador, A Little Book of Doctors' Rules


We recently published a paper with the same title as this blog post here. The intent was to operationalize the age-old "common things are common" axiom so that it is practicable to employ it during the differential diagnosis process, incorporating probability information into the DDx. This is possible now in a way that it never has been before, because there are now troves of epidemiological data that can be used to bring quantitative (e.g., 25 cases/100,000 person-years) rather than merely qualitative (e.g., very common, uncommon, rare) information to bear on the differential diagnosis. I will briefly summarize the main points of the paper and demonstrate how it can be applied to real-world diagnostic decision making.

First is that the proper metric for "commonness" is disease incidence (standardized as cases/100,000 person-years), not disease prevalence. Incidence is the number of new cases per year - those that have not been previously diagnosed - whereas prevalence is the number of already diagnosed cases. If the disease is already present, there is no diagnosis to be made (see the article for more discussion of this). Prevalence is approximately equal to the product of incidence and disease duration, so it will be higher (oftentimes a lot higher) than incidence for diseases with a chronic component; relying on it will lead to overestimation of the likelihood of diagnosing a new case. Furthermore, your intuitions about disease commonness are mostly based on how frequently you see patients with the disease (e.g., SLE), but most of these are prevalent, not incident, cases, so you will think SLE is more common than it really is, diagnostically. If any of this seems counterintuitive, see our paper for details (email me for a pdf copy if you can't access it).
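
To make the distinction concrete, here is a toy calculation showing how the steady-state approximation prevalence ≈ incidence × duration lets a chronic disease's prevalence dwarf its incidence (the numbers are illustrative placeholders, not estimates from our paper):

```python
# Illustrative only: how a chronic disease's prevalence can dwarf its incidence.
# The numbers below are hypothetical placeholders, not published estimates.

incidence = 5             # new cases per 100,000 person-years (hypothetical)
mean_duration_years = 20  # average time a case remains prevalent (hypothetical)

# Steady-state approximation: prevalence ~= incidence x mean disease duration
prevalence = incidence * mean_duration_years

print(f"Incidence:  {incidence}/100,000 person-years")
print(f"Prevalence: ~{prevalence}/100,000 persons")
# ~100 prevalent cases per 100,000 people, but only 5 new diagnoses per year:
# intuition trained on prevalent cases overestimates the chance of a new diagnosis.
```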

Second is that commonness exists on a continuum spanning 5 or more orders of magnitude, so it is unwise to dichotomize diseases as common or rare; information is lost in doing so. If you need a rule of thumb, though, it is this: if the disease you are considering has single-digit (or lower) incidence per 100,000 person-years, that disease is unlikely to be the diagnosis out of the gate (before ruling out more common diseases). Consider that you have approximately a 15% chance of ever personally diagnosing a pheochromocytoma (incidence <1/100,000 person-years) during an entire 40-year career, as there are only about 2000 cases diagnosed per year in the USA and nearly one million physicians in a position to initially diagnose them. (Note also that if you're rounding with a team of 10 physicians and a pheo gets diagnosed, you can't each count it as an incident diagnosis of pheo. If it's a team effort, you each diagnosed 1/10th of a pheochromocytoma. This is why "personally diagnosing" is emphasized above.) A variant of the common things axiom states that "uncommon presentations of common diseases are more common than common presentations of uncommon diseases" - for more on that, see this excellent paper about the range of presentations of common diseases.
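
If you want to check the pheochromocytoma arithmetic, here is a rough sketch. It assumes incident diagnoses are spread evenly across the pool of physicians positioned to make the initial diagnosis; the size of that pool is the assumption that drives the answer, so two round values are shown:

```python
import math

# Back-of-the-envelope: chance of personally making at least one incident
# diagnosis of pheochromocytoma over a career, assuming ~2000 US cases/year
# spread evenly across the physicians positioned to make the initial diagnosis.

cases_per_year = 2000
career_years = 40

for n_physicians in (500_000, 1_000_000):     # the denominator is the key assumption
    rate = cases_per_year / n_physicians      # expected personal diagnoses per year
    expected = rate * career_years            # expected personal diagnoses per career
    p_at_least_one = 1 - math.exp(-expected)  # Poisson approximation
    print(f"{n_physicians:>9} physicians: {expected:.2f} expected career diagnoses, "
          f"P(at least one) ~ {p_at_least_one:.0%}")
# With these round numbers the career probability lands in the high single digits
# to mid-teens of percent, depending on the assumed denominator.
```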

Third is that you cannot take a raw incidence figure and use it as a pre-test probability of disease. The incidence in the general population does not represent the incidence of disease presenting to the clinic or the emergency department. What you can do, however, is take what you do know about patients presenting with a clinical scenario, and about general population incidence, and make an inference about the relative likelihoods of disease. For example, suppose a 60-year-old man presents with fever, hemoptysis, and a pulmonary opacity that may be a cavity on CXR. (I'm intentionally simplifying the case so that the fastidious among you don't get bogged down in the details.) The most common cause of this presentation, hands down, is pneumonia. But it could also represent GPA (formerly Wegener's, every pulmonologist's favorite diagnosis for hemoptysis) or TB (tuberculosis, every medical student's favorite diagnosis for hemoptysis). How could we use incidence data to compare the relative probabilities of these 3 diagnostic possibilities?


Suppose we were willing to posit that 2/3rds of the time we admit a patient with fever and opacities, it's pneumonia. Using that as a starting point, we could then do some back-of-the-envelope calculations. CAP has an incidence on the order of 650/100,000 person-years; GPA and TB each have an incidence on the order of 2 to 3/100,000 person-years - CAP is 200-300x more common than these two zebras. (Refer to our paper for the history of, and references for, the "zebra" metaphor.) If CAP occupies 65% of the diagnostic probability space (see the image and this paper for an explication), then it stands to reason that, ceteris paribus (and things are not always ceteris paribus), TB and GPA each occupy on the order of 1/200th of 65%, or about 0.25%, of the probability space. From an alternative perspective, a provider will admit 200 cases of pneumonia for every case of TB or GPA she admits - there's just more CAP out there to diagnose! Ask yourself if this passes muster: when you are admitting to the hospital for a day, how many cases of pneumonia do you admit, and when is the last time you yourself admitted and diagnosed a new case of GPA or TB? Pneumonia is more than two orders of magnitude more common than GPA and TB, and, barring a selection or referral bias, there just aren't many of the latter to diagnose! If you live in a referral area of one million people, there will be only 20-30 cases of GPA diagnosed in that locale in a year (spread amongst hospitals and clinics), whereas there will be thousands of cases of pneumonia.
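
Here is the same back-of-the-envelope apportionment in a few lines of code, using the order-of-magnitude incidences above (the exact figures are rough approximations):

```python
# Rough apportionment of the diagnostic probability space for the case above.
# Incidences are order-of-magnitude figures in cases per 100,000 person-years.

incidence = {
    "CAP": 650,
    "GPA": 2.5,
    "TB": 2.5,
}

p_cap = 0.65  # posited share of fever-plus-opacity admissions that are pneumonia

# Ceteris paribus, scale the rarer diagnoses by their incidence relative to CAP.
for dx in ("GPA", "TB"):
    ratio = incidence[dx] / incidence["CAP"]
    p_dx = p_cap * ratio
    print(f"{dx}: roughly {p_dx:.2%} of the probability space "
          f"(CAP is ~{1/ratio:.0f}x more common)")
# Each zebra ends up occupying on the order of a quarter of a percent,
# versus 65% for CAP.
```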

As a parting shot: these are back-of-the-envelope calculations, and their several limitations are described in our paper. Nonetheless, they provide grounding for understanding the inertial pull of disease frequency in diagnosis. Thus, the other day I arrived in the AM to hear that a patient had been admitted with supposed TTP (thrombotic thrombocytopenic purpura) overnight. With an incidence of about 0.3 per 100,000 person-years, that is an extraordinary claim - a needle in the haystack has been found! - so, without knowing anything else, I wagered that the final diagnosis would not be TTP. (Knowing nothing else about the case, I was understandably squeamish about giving long odds against it, so I wagered at even odds, a $10 stake.) Alas, the final diagnosis was vitamin B12 deficiency (with an incidence on the order of triple digits per 100,000 person-years), with an unusual (but well recognized) presentation that mimics TTP and MAHA.

Incidence does indeed have the power of a physical law; and as Hutchison said in an address in 1928, the second commandment of diagnosis (after "don't be too clever") is "Do not diagnose rarities." Unless, of course, the evidence demands it - more on that later.

Thursday, May 17, 2018

Increasing Disparities in Infant Mortality? How a Narrative Can Hinge on the Choice of Absolute and Relative Change

An April 11, 2018 article in the NYT entitled "Why America's Black Mothers and Babies are in a Life-or-Death Crisis" makes the following alarming summary statement about racial disparities in infant mortality in America:
Black infants in America are now more than twice as likely to die as white infants — 11.3 per 1,000 black babies, compared with 4.9 per 1,000 white babies, according to the most recent government data — a racial disparity that is actually wider than in 1850, 15 years before the end of slavery, when most black women were considered chattel.
Racial disparities in infant mortality have increased since 15 years before the end of the Civil War?  That would be alarming indeed.  But a few paragraphs before, we are given these statistics:

In 1850, when the death of a baby was simply a fact of life, and babies died so often that parents avoided naming their children before their first birthdays, the United States began keeping records of infant mortality by race. That year, the reported black infant-mortality rate was 340 per 1,000; the white rate was 217 per 1,000.
The white infant mortality rate has fallen by 217 - 4.9 = 212.1 infants per 1000.  The black infant mortality rate has fallen by 340 - 11.3 = 328.7 infants per 1000.  So in absolute terms, the terms that concern babies (how many of us are alive?), the black infant mortality rate has fallen much more than the white infant mortality rate.  In fact, in absolute terms, the disparity is almost gone: in 1850, the absolute difference was 340 - 217 = 123 more black infants per 1000 births dying, and now it is 11.3 - 4.9 = 6.4 more black infants per 1000 births dying.

Analyzed a slightly different way, the proportion of white infants dying has been reduced by (217 - 4.9)/217 = 97.7%, and the proportion of black infants dying has been reduced by (340 - 11.3)/340 = 96.7%.  So, to within about one percentage point, black and white babies shared almost equally in the improvements in infant mortality that have been seen since 15 years before the end of the Civil War.  Or, we could do a simple reference-frame change and look at infant survival rather than mortality.  If we did that, the current infant survival rate is 98.87% for black babies and 99.51% for white babies.  The rate ratio for black:white survival is 0.994 - almost parity, depending on your sensitivity to deviations from unity.
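
For those who want to check the arithmetic, the same four numbers can be framed all three ways in a few lines:

```python
# The same four numbers, framed three ways.
black_1850, white_1850 = 340, 217    # deaths per 1,000 live births, 1850
black_now,  white_now  = 11.3, 4.9   # deaths per 1,000 live births, recent data

# 1) Absolute change and absolute disparity
print("Absolute fall, black:", black_1850 - black_now)        # 328.7 per 1,000
print("Absolute fall, white:", white_1850 - white_now)        # 212.1 per 1,000
print("Disparity in 1850:", black_1850 - white_1850)          # 123 per 1,000
print("Disparity now:    ", round(black_now - white_now, 1))  # 6.4 per 1,000

# 2) Relative reduction in mortality
print("Relative reduction, black:", f"{(black_1850 - black_now) / black_1850:.1%}")
print("Relative reduction, white:", f"{(white_1850 - white_now) / white_1850:.1%}")

# 3) Survival framing
surv_black = 1 - black_now / 1000
surv_white = 1 - white_now / 1000
print("Survival rate ratio, black:white:", round(surv_black / surv_white, 3))

# The mortality rate ratio alone tells the opposite story:
print("Mortality rate ratio, 1850:", round(black_1850 / white_1850, 2))  # ~1.57
print("Mortality rate ratio, now: ", round(black_now / white_now, 2))    # ~2.31
```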

It's easy to see how the author of the article arrived at different conclusions by looking only at the rate ratios in 1850 and at present.  But doing the math that way makes it seem as if a black baby is worse off today than in 1850!  Nothing could be further from the truth.

You might say that this is just "fuzzy math," as our erstwhile president did in the debates of 2000.  But there could be important policy implications as well.  Suppose that I have an intervention that I could apply across the US population, and I estimate that it will save an additional 5 black babies per 1000 and an additional 3 white babies per 1000.  We implement this policy and it works as projected.  The black infant mortality rate is reduced to 6.3/1000 and the white infant mortality rate to 1.9/1000.  We have saved far more black babies than white babies across the population.  But the rate ratio for black:white mortality has increased from 2.3 to 3.3!  Black babies are now 3 (three!) times as likely to die as white babies!  The policy has increased disparities even though black babies are far better off after the policy change than before it.
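
The arithmetic of the hypothetical is just as easy to verify:

```python
# The hypothetical policy change: absolute gains versus the rate ratio.
black_before, white_before = 11.3, 4.9   # deaths per 1,000 live births
black_saved, white_saved   = 5, 3        # additional babies saved per 1,000

black_after = black_before - black_saved   # 6.3 per 1,000
white_after = white_before - white_saved   # 1.9 per 1,000

print("Rate ratio before:", round(black_before / white_before, 1))  # ~2.3
print("Rate ratio after: ", round(black_after / white_after, 1))    # ~3.3
# Mortality falls for both groups, and falls more for black infants,
# yet the relative disparity widens.
```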

It reminds me of the bias whereby people would rather have a smaller income if it improved their standing relative to their peers.  Surprisingly, when presented with two choices:
  1. You make $50,000 and your peers make $25,000 per year
  2. You make $100,000 and your peers make $250,000 per year
many people choose 1, as if relative social standing is worth $50,000 per year in income.  (Note that relative social standing is just that, relative, and could change if you arbitrarily change the reference class.)

So, relative social standing has value and perhaps a lot of it.  But as regards the hypothetical policy change above, I'm not sure we should be focusing on relative changes in infant mortality.  We just want as few babies dying as possible. And it is disingenuous to present the statistics in a one-sided, tendentious way.

Wednesday, October 24, 2012

A Centrum a Day Keeps the Cancer at Bay?


Alerted as usual by the lay press to the provocative results of a non-provocative study, I read with interest the article in the October 17th JAMA by Gaziano and colleagues: Multivitamins in the Prevention of Cancer in Men. From the lay press descriptions (see: the NYT summary and a less sanguine NYT article published a few days later), I knew only that it was a positive (statistically significant) study, that the reduction in cancer observed was 8%, that a multivitamin (Centrum Silver) was used, and that the study population included 14,000 male physicians.

Needless to say, in spite of a dormant hope that something so simple could prevent cancer, I was skeptical. Despite decades, perhaps eons, of enthusiasm for the use of vitamins, minerals, and herbal remedies, there is, to my knowledge (please, dear reader, direct me to the data if this is an omission), no credible evidence of a durable health benefit from taking such supplements in the absence of deficiency. But supplements have a lure that can beguile even the geniuses among us (see: Linus Pauling). So before I read the abstract and methods to check the level of statistical significance, the primary endpoint, the number of endpoints, and the sources of bias, I asked myself: "What is the probability that taking a simple commercially available multivitamin can prevent cancer?" and "What kind of P-value or level of statistical significance would I require to believe the result?" Indeed, if you have not yet seen the study, you can ask yourself those same questions now.
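
One way to make that exercise concrete is a rough sketch using the Sellke-Bayarri-Berger bound, -e*p*ln(p), on the Bayes factor, which caps how much a given P-value could move a skeptical prior; the 5% prior below is merely a placeholder for your own answer to the first question:

```python
import math

def max_posterior_prob(prior, p_value):
    """Upper bound on the posterior probability of a real effect, given a p-value,
    using the Sellke-Bayarri-Berger bound -e*p*ln(p) on the Bayes factor."""
    assert 0 < p_value < 1 / math.e
    max_bf_against_null = 1 / (-math.e * p_value * math.log(p_value))
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * max_bf_against_null
    return posterior_odds / (1 + posterior_odds)

prior = 0.05  # placeholder skeptical prior that a multivitamin prevents cancer
for p in (0.05, 0.01, 0.001):
    print(f"p = {p}: posterior probability at most {max_posterior_prob(prior, p):.0%}")
# Even a P-value of 0.01 leaves a 5% prior well short of "more likely than not."
```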