Study Seeks to Determine Whether Ovarian Cancer Screenings Are Worth It

According to received wisdom, prevention is better than cure: but received wisdom, even when not altogether wrong, sometimes requires refinement. What might be true of individuals is not necessarily true of whole populations, and vice versa. Even after years of intense study, the correct but unsatisfying answer to the question of whether prevention is better than cure may well be, “It depends.”

The prognosis of ovarian cancer is relatively poor, with a 5-year survival rate of about 40 percent. It stands to reason that screening might improve survival through earlier detection and treatment; but what stands to reason is not always a good guide to reality.

A trial organized in Britain and reported in a recent issue of the Lancet tried to answer the question of whether annual screening of women between the ages of 50 and 70, by one of two methods (blood test or ultrasound), would reduce the death rate from ovarian cancer. If nothing else, it was a triumph of organization.

The investigators estimated that they needed a trial of 200,000 women to demonstrate a difference between the experimental groups and the controls. They contacted 1,243,202 women they thought might be eligible to take part, of whom 172,149 refused to participate and 776,626 did not respond. After exclusions, the investigators were able to find slightly more than the 200,000 they thought they needed: 50,640 were assigned at random to annual screening by blood test, 50,639 to ultrasound screening, and 101,359 to no screening at all. The women were followed up for up to 14 years, and remarkably few, only 1,313 of the original 202,546, were lost in the process.

The results of this giant effort were equivocal, as they so often are. In total, 649 women died of ovarian cancer during the trial: 347 in the unscreened population and 302 in the screened population. The reduction in the death rate among the screened groups was about 15 percent, which failed to reach statistical significance. However, after seven years mortality in the screened groups began to diverge from that in the unscreened group, and the authors concluded that, had they waited longer, a greater and statistically significant divergence might have emerged.

If, however, it is accepted that the women in the screened groups had fewer deaths from ovarian cancer because they were screened, it took about 690,000 screening procedures to save 45 lives, that is to say roughly 15,333 procedures per life saved. By contrast, the authors estimated that, in order to save one life from ovarian cancer, 641 women had to undergo screening for seven years. Was it worth it? This is a question that the authors, perhaps wisely, did not tackle, for it depends on another question: worth it for whom?
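The arithmetic above can be checked directly. A minimal sketch, using only the figures quoted in this article (the variable names are mine, not the trial's):

```python
# Back-of-envelope check of the trial's figures as quoted in this article.
deaths_unscreened = 347      # ovarian-cancer deaths in the no-screening arm
deaths_screened = 302        # deaths across both screened arms combined
lives_saved = deaths_unscreened - deaths_screened

total_procedures = 690_000   # approximate total screening procedures performed
procedures_per_life_saved = total_procedures / lives_saved

print(lives_saved)                        # 45
print(round(procedures_per_life_saved))   # 15333
```

The authors' separate figure, that 641 women must be screened for seven years to save one life, comes from their own statistical modelling and cannot be rederived from these raw counts alone.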

One way of answering might be to ask the potential subjects of a screening program how much they would be prepared to pay personally for such a result; but even this cannot give a satisfactory answer, for how much people are willing to pay for a marginal benefit depends on their economic situation.

The authors also do not tackle the question of the generalizability of their study. The compliance of the screened women was very good, about 80 percent, which is to say that four-fifths of the scheduled screening procedures were actually carried out. But these were a self-selected group who agreed to participate; only about a quarter of the women eligible for the trial took part in it, and it seems intuitively likely that their compliance was higher than could have been obtained from the other women under non-trial conditions.

Given that screening also resulted in 2,000 operations on women with false-positive screening results, the trial seems to me not to have been a giant step forward.