
Don’t Be Fooled: Evidence Supports School Choice

Big shock — the media are misrepresenting the research on vouchers again.

by Greg Forster

June 21, 2008 - 12:56 am

Just as Congress begins debate on renewing the federally funded school voucher program in Washington, D.C., the latest findings from the official empirical evaluation of the program have been released. They’re being widely touted as proof that vouchers don’t benefit students. But that’s not what the study says. Big shock — the media are misrepresenting the research on vouchers again. And this isn’t the only study that matters; a large body of top-quality research consistently supports vouchers.

The study reports that students who were offered a voucher had higher test scores than a control group made up of students who applied for vouchers but were not offered them because they lost a random lottery to get into the program. However, the positive results were not statistically certain, meaning we cannot be highly confident that the voucher students’ higher test scores were really the result of vouchers and not a statistical fluke.

So the study shows that the voucher students did better. It just isn’t able to say with a high level of certainty that the vouchers are the reason they did better.

Adding insult to injury, the results just barely missed the conventional cutoff for reporting results with high confidence. The standard procedure is to report results if they are 95 percent certain. The results in this study were 91 percent certain. And this is the second year in a row the D.C. results have come close to the cutoff without reaching it.

The 95 percent cutoff is just a conventional practice, like issuing driver's licenses at age 16. Everyone understands that teenagers do not experience a miraculous transformation at midnight on their 16th birthdays. Similarly, there's nothing magical about 95 percent certainty. Recognizing this, many researchers will report a result as "moderately" certain if it's at least 90 percent certain.

Of course, we must respect the fact that 91 percent is not 95 percent. But that doesn’t mean we shouldn’t notice that this finding is moderately certain even if it isn’t completely certain. Moses did not come down from Mount Sinai with stone tablets that said “Thou shalt completely ignore any result that is less than 95 percent statistically certain.”

This whole issue is widely misunderstood because the standard scientific term for results we can be highly confident in is “statistically significant.” That’s a misleading phrase, because statistical significance doesn’t actually measure the “significance” of the finding. It measures how sure we are that the variables being looked at in the study caused the observed changes, as opposed to a mere statistical accident.
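For readers who want to see the arithmetic, the relationship between a study's test statistic and its "certainty" level can be sketched in a few lines. This is a generic illustration, not the actual D.C. data: the z-value of 1.7 is an invented number, chosen only because it happens to land near the 91 percent mark the study reported, while the conventional 95 percent threshold corresponds to a z-value of about 1.96.

```python
# Illustrative only: how a two-tailed test statistic maps to a confidence
# level. The z-values here are hypothetical, not taken from the D.C. study.
from math import erf, sqrt

def confidence_from_z(z: float) -> float:
    """Two-tailed confidence level implied by a z-statistic."""
    return erf(abs(z) / sqrt(2))

study_z = 1.7  # hypothetical result just under the conventional threshold
conf = confidence_from_z(study_z)

print(f"confidence: {conf:.1%}")            # roughly 91%
print("clears 95% cutoff?", conf >= 0.95)   # no: not "statistically significant"
print("clears 90% cutoff?", conf >= 0.90)   # yes: "moderately certain"
```

A result like this fails the conventional 95 percent test while still being far more likely real than a fluke, which is exactly the situation the article describes.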

More important, this is not the only study to compare voucher students to a control group using a random lottery. It is actually the tenth such study. And if voucher critics are going to make such a big deal out of this study, they have no complaint coming if we look at the other nine studies as well.

In all ten of the random assignment studies, voucher students had higher test scores than the control group. And this positive result achieved at least 95 percent statistical certainty in eight of the ten studies.

This means school vouchers are better supported by top-quality empirical evidence than any other education policy. There is no other approach to education that has been studied with this kind of gold-standard empirical methodology and shown such consistently positive results.

And it’s also worth remembering that the D.C. study is ongoing. Several of the previous random-assignment studies of vouchers didn’t achieve statistical certainty at first, but did so in later years. The D.C. study may yet follow where they led, achieving 95 percent certainty later.

On that subject, it’s heartening to see the Washington Post editorial board get this study right, even if the paper’s news pages didn’t. While the reporters erroneously claim that voucher students “generally did no better on reading and math tests,” when in fact they did do better, the editors correctly characterize the study results as “promising” even if they’re “no slam dunk.” The editorial urges Congress not to pull the plug just because the statistical certainty of the data barely failed to reach an arbitrary benchmark.

Of course, there is much more to be said on the subject. Vouchers have lots of other benefits besides improving education for the students who use them. For example, participating parents report that their children are now in safer schools — school safety being a chronic problem in D.C. And the program has been shown to dramatically reduce racial segregation — another chronic problem in the city.

But as far as this particular result is concerned, the main thing to bear in mind is that it’s 91 percent certain that vouchers helped these students learn more, and it’s 100 percent certain that the positive effects of vouchers have more empirical evidence to support them than any of the available alternatives.

Greg Forster is a senior fellow at the Foundation for Educational Choice.