SSN Basic Facts

Evolving Evidence on School Voucher Effects

Indiana University-Bloomington

School vouchers have been embraced largely for their potential to help students trapped in underperforming public schools gain access to higher performing educational options. Indeed, many voucher programs are targeted at economically disadvantaged students in under-performing urban schools who would otherwise be unable to afford tuition at presumably more effective private schools. So how well are voucher programs doing at producing positive student learning outcomes? A more sophisticated analysis shows that whether voucher programs outperform their public school counterparts depends heavily on the size and scope of those programs, and that expanding size and scope, as policymakers hoping to spread the expected benefits of vouchers now seek to do, may instead result in negative impacts for students.

“Vote Counting” Methodology and Voucher Data Representation

Researchers have been studying whether children who use vouchers actually learn more ever since the first voucher program began in Milwaukee in the early 1990s. For the most part, proponents of voucher programs have highlighted findings from a set of studies that appear to show positive impacts for students whose families use vouchers. For example, in testimony to the U.S. Congress, one exhibit touted the findings from 12 small-scale studies, noting positive results for 13 subgroups and no results showing voucher students falling behind. The pro-voucher group EdChoice regularly updates a list of studies on voucher impacts to indicate the efficacy of these programs, with the most recent update showing positive findings on learning far outnumbering negative ones.

However, such simplistic representations of the research evidence obscure factors that are important for understanding the effectiveness and potential of these programs. This "vote-counting" methodology compares the numbers of positive, null, and negative findings with little regard for key factors, including:

  • Study size
  • Program characteristics (eligibility, caps, voucher amount)
  • Effect sizes overall, or for different subgroups
  • Trends in findings

Factors such as these are useful in understanding the analytical strength of different sets of studies, interpreting their usefulness, and illuminating patterns in the research.

Larger-Scale Studies Indicate More Nuanced—and More Negative—Effects of Voucher Programs

To provide a more nuanced and precise picture of the evidence on school voucher effects on learning, we examined the different studies of the impacts of these programs in the United States. We drew from the list of studies "nominated" by voucher advocates as worthy of attention and added two more recent reports. These two rigorous studies, one published in a leading peer-reviewed journal and the other commissioned by a pro-voucher organization, were designed in ways that allow for causal inferences, that is, for determining the effects of vouchers themselves.

[Figure: Timeline of U.S. voucher studies, showing each study's estimated effect on voucher students' math achievement relative to a baseline, with bar width representing program size at the time of the study.]

In this timeline, studies of voucher programs are arranged chronologically from left to right, showing each study's findings on program effects on voucher students' math achievement. (For brevity, we focus on math here; math is also thought to be a better reflection of program effects because, more so than reading, it tends to be learned in school.) Results for each program studied are presented so that divergences from the baseline indicate the relative impacts on learning: essentially, how the students in each voucher program performed compared to control groups. Program size at the time of each study is represented by the width of each bar in the graph.

A number of important insights become available that were not apparent in earlier “vote-counting” representations of voucher effects: 

  • Almost all impacts found in early studies were modest at best, and those studies examined rather small programs, typically targeted to specific populations in particular urban areas.
  • As programs grew in size, the results turned negative, often to a large degree.
  • There was a clear correlation between program size and impacts: smaller programs (and studies) showed modestly positive impacts, while larger programs had large negative impacts.

Of course, as with the vote-counting representations of voucher research, one should recognize that these programs and studies are not strictly comparable, so caution is advisable in reading these findings as trends over time. For instance, the programs differed in scale, scope, eligibility, funding, demand, take-up, and more, while the studies differed in design, time spans analyzed, and so forth. Still, this more nuanced view of the evidence highlights clear concerns that these programs can harm student learning, precisely as policymakers seek to expand them.

Information and data in this brief are drawn from the authors’ in-process research.