If you feel whipsawed about when and how often to treat patients at risk for prematurity with antenatal corticosteroids (ACS), you're not alone.
Use of ACS really began with a chance finding by Professor Sir Graham Liggins of the University of Auckland, New Zealand, nearly 40 years ago.1 His observation, in ewes, that exogenous corticosteroids induced parturition and also improved postnatal survival by enhancing fetal lung maturation led to the first randomized controlled trial (RCT) of ACS in humans.1 Four years later, Liggins and Howie reported a lower rate of respiratory distress syndrome (RDS) in premature infants born to mothers who had received ACS rather than placebo (9% vs. 25.8%).2 The infants who benefited most were those delivered 2 to 7 days after therapy (3.6% vs. 33.3%) and at 26 to 32 weeks' gestation (11.8% vs. 69.6%).
NIH takes the lead
Despite ACS's impressive benefits, obstetricians initially were slow to adopt the approach. To encourage use of the drugs in at-risk pregnancies, the National Institutes of Health (NIH) sponsored a Consensus Conference on the effect of ACS on perinatal outcomes in 1994.4 The conclusion: ACS reduced neonatal mortality, RDS, and intraventricular hemorrhage (IVH), and the benefits extended to pregnancies at 24 to 34 weeks' gestation. The panel also posited that while the greatest benefits accrued after 24 hours of treatment, therapy for less than 24 hours might still improve outcomes. The panel recommended that:
Repeat courses take hold
The NIH panel's recommendation had the desired effect: use of ACS to enhance fetal lung maturity skyrocketed.5,6 Indeed, concern that the treatment's beneficial effects lasted only 7 days led more and more clinicians to repeat courses weekly in at-risk patients. Some clinicians even began giving ACS prophylactically to patients at particularly high risk, such as women with multifetal gestations.