The incidence of breast cancer has been declining for more than a decade, perhaps as a result of the dramatic reduction in the use of postmenopausal therapy with combined estrogen and progestin. Far more significantly, breast cancer deaths have been declining for the past 25 years.1 This decline is likely due to an improved understanding of relevant cancer biology, better therapies, and advances in screening technologies and their utilization. However, the prevalence of breast cancer rises with age, making screening more efficacious in older women and lowering the positive predictive value (PPV) of screening in younger women. For example, routine mammograms have a PPV of 1.6% among women aged 40 to 44 years versus 5.9% among women aged 60 to 64 years.2 Moreover, the higher prevalence of extremely dense breast tissue in younger women lowers mammographic screening sensitivity (73.4% for women aged 40 to 44 vs 84.7% for those aged 60 to 64 years).2 As a result, 1904 women in their 40s would need to be screened to prevent one breast cancer death, whereas only 377 women in their 60s would need to be screened to avoid one such death.3
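The dependence of PPV on prevalence follows directly from Bayes' theorem, and can be sketched numerically. The sensitivities below are the cited screening figures; the prevalence and specificity values are illustrative assumptions, not values from the cited study, so the resulting PPVs only approximate the reported 1.6% and 5.9%.

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Sensitivities from the cited screening data; prevalence and
# specificity are illustrative assumptions only.
younger = ppv(prevalence=0.0015, sensitivity=0.734, specificity=0.90)
older = ppv(prevalence=0.0035, sensitivity=0.847, specificity=0.90)
print(f"PPV, ages 40-44: {younger:.1%}")  # lower prevalence -> lower PPV
print(f"PPV, ages 60-64: {older:.1%}")
```

Even with identical specificity, the lower disease prevalence in the younger group drives its PPV down, which is the mechanism behind the age gradient described above.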
Thus, while meta-analyses suggest that mammography reduces the relatively uncommon (<1 per 1000) occurrence of breast cancer death among women aged 39 to 49 years by approximately 15% (relative risk, 0.85 [95% credible interval, 0.75–0.96]; 8 trials),3 such screening substantially increases costs through both false-positive results and overdiagnosis of lesions that would not otherwise have led to death. But just how much expense does such screening add to the health system? A recent study suggests far more than previously thought.
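The number-needed-to-screen figures and the relative-risk estimate above are linked by simple arithmetic: NNS = 1 / (baseline risk × relative risk reduction). The baseline mortality of about 3.5 per 1000 over follow-up used below is an illustrative assumption chosen to show how the cited figure of roughly 1900 women in their 40s can arise from a relative risk of 0.85.

```python
def number_needed_to_screen(baseline_risk, relative_risk):
    # Absolute risk reduction = baseline risk x relative risk reduction
    arr = baseline_risk * (1 - relative_risk)
    return 1 / arr

# Baseline mortality of 3.5 per 1000 is an illustrative assumption;
# relative risk 0.85 is the meta-analytic estimate cited above.
print(round(number_needed_to_screen(0.0035, 0.85)))
```

This yields roughly 1905, close to the cited 1904; the same 15% relative reduction applied to the higher baseline mortality of older women explains why their NNS is so much smaller.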