Is clinical research overrated?

March 1, 2006

For all the talk about the importance of rigorous medical research and randomized double-blind trials, many physicians still put more faith in clinical experience than clinical experiment. Is it possible to strike a reasonable balance between the two?

Perhaps by now you've grown tired of hearing about the virtues of evidence-based medicine. In recent years, it seems every other issue of JAMA or Obstetrics & Gynecology has encouraged physicians to keep up with the latest double-blind, randomized controlled trial (RCT) or chided them for not putting those findings to good use.

One recent commentary, for instance, concluded that "Research that should change practice is often ignored for years."1 Claude Lenfant, MD, a prominent NIH scientist, after pointing out that the government has spent about $250 billion on medical research since 1950, asserted that "We in the US, both health providers and members of the public, are not applying what we know."2 While it's easy to sit in the ivy-covered tower and point a finger at hard-working practitioners in the trenches for their resistance to change or failure to keep up with the literature, in reality the issues are far more complex than that.

Prove to me that this research applies to my patients

When researchers design a large-scale RCT, they invariably establish a list of rigorous inclusion and exclusion criteria. If, for instance, a new treatment regimen for polycystic ovary syndrome (PCOS) is being evaluated, patients may be excluded if they have coexisting disorders such as hypertension, diabetes, or abnormal liver function. Other patients may be left out of the trial because they are on other medications. Likewise, the inclusion criteria may require that patients fall within a very strict set of diagnostic parameters, such as a narrow age range. And depending on the communities from which the subjects are drawn, they may be predominantly Caucasian or upper middle class, or predominantly underserved minority patients; rarely do study participants reflect a true cross-section of our practice populations. For example, many of the recent reports assessing risk factors for prematurity come from at-risk, underserved populations unlike those found in most private practices.4 Unfortunately, when a woman with PCOS or preterm contractions walks in the door, you don't have the luxury of turning her away because she doesn't fit those inclusion and exclusion criteria.

Another shortcoming of RCTs is that they often fail to give an accurate picture of a treatment's adverse-reaction profile. In one review of 192 drug trials, for instance, side effects and laboratory toxicology findings were reported in fewer than a third of the investigations.5 Similarly, the initial studies of the safety of trial of labor after prior cesarean delivery were conducted at tertiary care centers fully staffed with readily available residents, attending physicians, and anesthesiologists, all capable of performing a stat cesarean delivery. Those findings have almost no relevance to small, rural community hospitals.

Of course, there's still the other side of the coin: When several large, randomized, placebo-controlled trials, including those with diverse patient populations, conclude that a new treatment is safe and effective, there's little reason to believe it won't apply in community practice. A good example is the usefulness of intrapartum penicillin in group B streptococcus carriers.