Manipulation in the Grading of New York's Regents Examinations

February 2011
Brian Jacob, Thomas Dee, Justin McCrary

The challenge of designing effective performance measurement and incentives is a general one in economic settings where behavior and outcomes are not easily observable. These issues are particularly prominent in education, where test-based accountability systems for schools and students have proliferated over the last two decades. In this study, we present evidence that the design and decentralized, school-based grading of New York’s high-stakes Regents Examinations have led to pervasive manipulation of student test scores just below performance thresholds. Specifically, we document statistically significant discontinuities in the distributions of subject-specific Regents scores that align with the cut scores used to determine both student eligibility to graduate and school accountability. Our results suggest that roughly 3 to 5 percent of the exam scores that qualified for a high-school diploma actually reflected performance below the state requirements. Moreover, we find that rates of test manipulation in NYC were roughly twice as high as those statewide. We estimate that roughly 6 to 10 percent of NYC students who scored above the passing threshold for a Regents Diploma actually had scores below the state requirement.
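To illustrate the intuition behind detecting such manipulation (this is a simplified sketch, not the paper's actual estimator), one can compare the count of scores in the bin just below a cut score to the counts in neighboring bins: systematic regrading of near-miss exams produces "missing mass" just below the threshold and excess mass at or just above it. The function and simulated data below are hypothetical, constructed only to demonstrate the idea.

```python
import numpy as np

def bin_discontinuity_z(scores, cutoff, width=1.0):
    """Illustrative check (not the paper's method): compare the count of
    scores in the bin just below `cutoff` to the average of its two
    flanking bins, with a Poisson approximation for the z-statistic.
    A large negative z suggests mass is missing just below the threshold."""
    below = np.sum((scores >= cutoff - width) & (scores < cutoff))
    left = np.sum((scores >= cutoff - 2 * width) & (scores < cutoff - width))
    right = np.sum((scores >= cutoff) & (scores < cutoff + width))
    expected = (left + right) / 2
    return (below - expected) / np.sqrt(max(expected, 1.0))

# Simulated data: smooth score distribution, then manipulation that bumps
# most scores within 2 points below a (hypothetical) cutoff of 65 up to 65.
rng = np.random.default_rng(0)
raw = rng.normal(65, 10, 50_000).round()
bump = (raw >= 63) & (raw < 65) & (rng.random(raw.size) < 0.7)
scores = np.where(bump, 65, raw)

z = bin_discontinuity_z(scores, cutoff=65, width=2.0)
print(z < -5)  # strongly negative z: missing mass just below the cutoff
```

A smooth, unmanipulated distribution would yield a z-statistic near zero; the simulated regrading produces a pronounced valley below the cutoff and a spike at it, which is the qualitative pattern the paper documents in actual Regents score distributions.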

We would like to thank Tom McGinty and Barbara Martinez of the Wall Street Journal for bringing this issue to our attention and providing us with the data used in this analysis. We would also like to thank Don Boyd and Jim Wyckoff for helpful comments. All errors are our own.