The Curious Incident of Death Rates in the 20th Century

How we measure vaccine outcomes is a matter of life and death. Should we use cases (incidence) of a disease, or should we use mortality rates, when measuring vaccine success? If you visit “sciencebasedmedicine”, they suggest that you should use incidence of measles to measure vaccine effectiveness. If you look at the following graphs, you can see that the choice matters quite a bit! The left panel suggests that vaccines are incredibly effective, while the right panel suggests that vaccines showed up late to the party. It is no wonder we are so polarized! This is a question we need to take seriously, because the health of our children depends on an honest answer.

[Figure: left panel, Measles Incidence (CDC Data); right panel, United States Measles Mortality Rates]

First, some quick definitions.

  1. Mortality rate = deaths from a disease / total population
  2. Incidence = cases of a disease / total population

Hence, the mortality rate is not the number of deaths divided by the number of people who got sick, in case that was the definition one or two of you had in mind. (That quantity is the case-fatality rate, which we will meet shortly.)

Sciencebasedmedicine criticizes the use of death rates because death rates were falling anyway thanks to advances in medicine, sanitation, and nutrition. In their view, the advancement of medicine allowed a person afflicted by polio-induced paralysis to live, possibly for many years to come. To them, mortality rates don’t measure exactly what it is we want to measure.

The Case Against Incidence

The strongest criticism I have of using incidence is summarized in a classic statistics book aptly called “How to Lie With Statistics”. See page 84 for a description of how ‘cases’ fall victim to the era’s fashion of diagnosis:

…a general consciousness of polio was leading to more frequent diagnosis and recording of mild cases. Finally, there was an increased financial incentive, there being more polio insurance and more aid available from the National Foundation for Infantile Paralysis. All this threw considerable doubt on the notion that polio had reached a new high.

What Huff concludes in this gem of a book is that

It is an interesting fact that the death rate or number of deaths often is a better measure of the incidence of an ailment than direct incidence figures — simply because the quality of reporting and record-keeping is so much higher on fatalities.

Mathematically, we can state that

p(death) = p(death | sick) * p(sick)

In math-speak, the vertical pipe “|” means “given”. In other words, this says that the probability of dying (the mortality rate) equals the probability of dying given that you are infected (the case-fatality rate) multiplied by the probability of getting infected (the incidence rate). Incidence measures the latter, i.e. the chance of getting sick. Another rate, which we haven’t mentioned, is the morbidity rate; it would be the ideal measurement since it captures not just deaths but also other consequences of disease, like permanent brain damage or paralysis. The reason it is rarely used, however, is that the definition of morbidity is unwieldy and ambiguous.
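
To make the decomposition concrete, here is a minimal sketch in Python using purely hypothetical numbers (the population, case, and death counts are invented for illustration and are not CDC figures):

    # Hypothetical numbers illustrating p(death) = p(death | sick) * p(sick)
    population = 1_000_000   # total population (made up)
    cases = 10_000           # people infected (made up)
    deaths = 100             # deaths among those infected (made up)

    incidence = cases / population          # p(sick)
    case_fatality = deaths / cases          # p(death | sick)
    mortality_rate = deaths / population    # p(death)

    # The identity holds by construction:
    assert abs(mortality_rate - case_fatality * incidence) < 1e-12

    print(f"incidence      = {incidence:.2%}")        # 1.00%
    print(f"case fatality  = {case_fatality:.2%}")    # 1.00%
    print(f"mortality rate = {mortality_rate:.4%}")   # 0.0100%

The point of the decomposition is that the mortality rate can fall either because fewer people get sick or because fewer of the sick die, and it lets us ask which of the two is doing the work.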

The advantage of incidence

Vaccine advocates like to measure p(sick), theoretically at least, because it captures the prevalence of a disease more directly than a specific outcome of disease like death. The inconsistency, however, is that the fall in p(sick) in the post-vaccine era doesn’t lead to a corresponding drop in p(death). In other words, death rates are staying the same, which can only mean that the case-fatality rate, the p(death | sick) term in our equation, is shooting through the roof. Here is a graph of case-fatality rates for measles:

[Figure: measles case-fatality rates, from http://www.cdc.gov/vaccines/pubs/pinkbook/downloads/appendices/G/cases-deaths.pdf]

The y-axis is in percent, and as you can see the case-fatality rate more than doubled between 1963, the year the measles vaccine was licensed, and 1970. That is easy to explain if there is a lot of under-reporting happening in the case statistics.
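
A quick way to see how under-reporting inflates the apparent case-fatality rate: deaths are recorded reliably, so the numerator stays put, while the denominator shrinks to whatever fraction of true cases actually gets reported. A minimal sketch with made-up counts:

    # How under-reported cases inflate the apparent case-fatality rate (CFR).
    # All counts are hypothetical, for illustration only.
    true_cases = 100_000
    deaths = 100                                    # assume deaths are reported reliably
    print(f"true CFR: {deaths / true_cases:.2%}")   # 0.10%

    for completeness in (1.0, 0.5, 0.25, 0.1):      # fraction of true cases reported
        reported_cases = true_cases * completeness
        apparent_cfr = deaths / reported_cases
        print(f"completeness {completeness:>4.0%}: apparent CFR = {apparent_cfr:.2%}")
    # completeness 100%: apparent CFR = 0.10%
    # completeness  50%: apparent CFR = 0.20%
    # completeness  25%: apparent CFR = 0.40%
    # completeness  10%: apparent CFR = 1.00%

On this reading, a rising apparent case-fatality rate after 1963 could simply reflect reported cases falling faster than true cases, rather than measles suddenly becoming deadlier.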

Another argument for using incidence is that if the chance of dying from the disease is very small and the number of true cases is also small, it can be difficult to make strong inferences about the actual prevalence of a disease from deaths alone. With small sample sizes it is especially difficult to estimate the tail of a distribution.
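
To see why small numbers make inference from deaths alone so shaky, here is a rough Monte Carlo sketch (the case count and case-fatality rate are hypothetical): it simulates death counts for a fixed number of true cases and shows how widely a prevalence estimate backed out of deaths would swing.

    import random
    random.seed(0)

    true_cases = 500     # hypothetical number of people actually infected
    cfr = 0.002          # hypothetical 0.2% chance of dying per case

    estimates = []
    for _ in range(10_000):
        deaths = sum(random.random() < cfr for _ in range(true_cases))
        estimates.append(deaths / cfr)   # naive case estimate backed out of deaths

    estimates.sort()
    low, high = estimates[500], estimates[9500]   # roughly the middle 90%
    print(f"true cases: {true_cases}, middle 90% of estimates: {low:.0f} to {high:.0f}")

With only a handful of expected deaths, the deaths-based estimate of how many people were actually sick is extremely noisy, which is the one genuine advantage incidence data has once a disease becomes rare.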

To be fair, mortality rates share some of the drawbacks to which incidence falls victim. See The Questionable Contribution of Medical Measures to the Decline of Mortality in the US in the Twentieth Century, which points out that mortality rates are also subject to the fashion of the day, and that death often results from a complex set of conditions rather than a single disease. Other issues include changes in disease classification and in registration practices. As the study points out, however, some of these errors average out when pooling or aggregating across countries (and the same applies to incidence data). But if there are weaknesses in mortality data, those weaknesses will only be amplified in incidence data, so the former retains a strong relative advantage.

Incidentally Whimsical

Work is routinely done to quantify the whimsical nature of case reporting; health departments call this “completeness of reporting”. Measles surveillance is a multi-stage process: the patient must first seek health care, the physician must recognize the diagnosis, and finally the case must be reported to the health department.


According to this study from the Journal of Infectious Diseases:

Estimates of completeness of reporting from the 1980s and 1990s vary widely, from 3% to 58%. One study suggests that 85% of patients with measles sought health care, the proportion of compatible illnesses for which measles was considered varied from 13% to 75%, and the proportion of suspected cases that were reported varied from 22% to 67%. Few cases were laboratory-confirmed, but all were reported. Surveillance in the United States is responsive, and its sensitivity likely increases when measles is circulating.
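
To see how the stage-by-stage probabilities in that quote compound, here is a minimal sketch multiplying them through the surveillance chain (seek care, measles considered, case reported). Pairing the low bounds together and the high bounds together is my simplification; the extremes need not co-occur in reality.

    # Completeness of reporting ~= p(seek care) * p(measles considered) * p(case reported)
    # The ranges below are the figures quoted from the study above.
    p_seek_care = 0.85
    p_considered = (0.13, 0.75)
    p_reported = (0.22, 0.67)

    low = p_seek_care * p_considered[0] * p_reported[0]
    high = p_seek_care * p_considered[1] * p_reported[1]
    print(f"implied completeness: {low:.1%} to {high:.1%}")
    # implied completeness: 2.4% to 42.7%

That range is in the same ballpark as the 3% to 58% completeness estimates quoted at the start of the passage.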

If you continue reading the paper, you will see that all disease-reporting requirements come from state laws and regulations. Reporting of measles is required in all states, but the national criteria classify measles cases into three categories: 1) suspected, 2) probable, and 3) confirmed; it is neither clear nor consistent which of these categories is reportable, or when in the diagnostic process reporting should occur. Reporting is a passive process initiated by the reporter, there are no penalties for failure to report, reporting is often incomplete, and few diagnoses are actually laboratory-tested and confirmed.

In Bean JA, et al. A comparison of national infection and immunization estimates for measles and rubella. Am J Public Health. 1979 June; 69(6): 611–612, the authors show how widely measles incidence varies across different data sources. In 1968, were there 200 cases or closer to 1,000?

[Figure: Estimated Number of Measles Cases by Year, 1966–1974]

“The researchers conclude that there is little credence to the validity of any of the data sources and the CDC Morbidity and Mortality Weekly Report (MMWR) figures do not appear to be reliable national estimates.”

See Bader M. Communicable diseases are fraught with variations. Am J Public Health. 1979 June; 69(6): 611–612. PMCID: PMC1618974.

In these papers it was estimated that the CDC was under-reporting incidence by a factor of 10 (i.e. only 1 in 10 true cases was reported). I note that this is roughly the factor by which reported measles cases dropped in the left-hand panel of the first figure.

During “scares”, over-reporting has, according to UK health officials, been as high as 7,400%, i.e. about 74 times the lab-confirmed figure (CDR Weekly, Volume 15 Number 12); over-reporting of 1,500% is shown in CDR Weekly, Volume 16 Number 12. Thanks to childhealthsafety.wordpress.com/graphs/ for pointing these out.
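
Since percentages of under- and over-reporting are easy to misread, here is a small sketch converting them into multiplicative factors and applying them to a hypothetical reported count (the 1,000-case figure is invented purely for illustration):

    # Converting under-/over-reporting figures into multiplicative corrections.
    reported = 1_000                          # hypothetical reported case count

    # Under-reporting by a factor of 10: only 1 in 10 true cases is reported.
    completeness = 0.10
    implied_true_cases = reported / completeness       # 10,000

    # Over-reporting of 7,400%: reported cases are about 74x the lab-confirmed count.
    overreport_factor = 7_400 / 100                    # 74
    implied_confirmed = reported / overreport_factor   # about 13.5

    print(implied_true_cases, round(implied_confirmed, 1))

The two corrections pull in opposite directions, which is exactly why raw incidence counts are so sensitive to the reporting climate of the moment.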

Conclusion

The difference in how effective vaccines look depending on whether we select incidence or mortality data should by now be crystal clear. Vaccine advocates have argued that the reason death rates were falling is medical advancement, like the iron lung, which offered paralytic polio patients better chances of survival. Furthermore, they would argue that mortality statistics confound the real prevalence of diseases, which is better captured by incidence statistics. However, this doesn’t answer why case-fatality ratios actually increase following the introduction of vaccines; on that point the epidemiological statistics contradict the medical-advancement hypothesis, and the increase is easily explained by under- and over-reporting together with the whims of fashionable diagnoses. Reporting of mild cases of a disease is especially subjective, whereas reporting of mortality data is likely to meet higher standards. Because case counts are really a proxy for diagnoses, the evidence and scientific literature referenced here suggest that there is very little of a case to be made for the curious-looking incidence data over death rates, in the 20th century and beyond.

About PD

PD is passionate about using his background in math, statistics, and economics to explore new and interesting ideas about health, nutrition, and the incentives that drive products and the policies that surround them.