Doctor of Philosophy in Education Policy (PhD)
Gary W. Ritter
Keywords: Education, Accountability, Value Added
Over the last twenty years, value-added measures (VAMs) have proliferated in education research and policy. Whether applied to teachers, schools, or districts, VAMs attempt to measure the contribution made by a unit of interest toward observed student outcomes, typically test scores in literacy and math. At the same time, a small number of states have developed methods for formally comparing schools on those outcomes, and such methods may be designed and used in ways that qualify them as a kind of VAM. My primary interest is to evaluate the properties of a similar schools model I develop in comparison to three other VAMs.

Using statewide student data and math test scores from 2009 to 2014, I develop a similarity index for comparing schools based on observable student characteristics. Using the rank ordering of schools on this index, I then compare each school's mean math scores to those of the 15 schools immediately below it and the 15 schools immediately above it. Schools' rankings against their comparison groups are then treated as a VAM and compared to three other school effectiveness models: Student Growth Percentiles (SGP), Student Value Added (SVA), and Mean Prior Z (MPZ). The models are compared on four properties: fairness, stability, validity, and transparency.

I find that the Similar Schools Comparisons (SSC) model is more stable than SGP and SVA, and similar in stability to MPZ. On fairness, defined as the strength of the relationship between model results and schools' student demographics, SSC is fairer than the other three models, though the other three also show only a weak overall relationship to demographics. On validity, defined as concurrence between the models, SSC aligns most closely with MPZ and has a modest relationship with SGP and SVA. On transparency for the public and educators, SSC is potentially valuable because it evaluates and compares schools in a highly visible way, ranking them against a known list of similar schools. Yet insofar as it relies on multiple regression to calculate the similarity index, SSC loses some of that transparency and requires specialized statistical knowledge to construct. SSC is promising in its stability, fairness, and transparency, but further investigation is needed to establish its validity and proper interpretation relative to other VAMs.
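The neighbor-ranking step of the SSC model can be illustrated with a short sketch. This is not the dissertation's actual code: the function name, the toy data, and the use of a simple within-group rank are all assumptions for illustration; the real model derives the similarity index from a multiple regression on observable student characteristics, which is omitted here.

```python
# Illustrative sketch of the Similar Schools Comparisons (SSC) idea:
# order schools by a precomputed similarity index, form each school's
# comparison group from up to 15 neighbors on either side, then rank the
# school's mean math score within that group (1 = highest in group).
# All identifiers and data below are hypothetical.

def ssc_rank(schools, window=15):
    """schools: list of (school_id, similarity_index, mean_math_score).
    Returns {school_id: rank of the school's mean score within its
    comparison group of itself and up to `window` neighbors each way}."""
    # Sort by the similarity index so adjacent schools are "similar schools".
    ordered = sorted(schools, key=lambda s: s[1])
    results = {}
    for i, (sid, _, score) in enumerate(ordered):
        lo = max(0, i - window)            # up to `window` schools below
        hi = min(len(ordered), i + window + 1)  # and `window` above
        group = ordered[lo:hi]
        # Rank = 1 + number of group members scoring higher.
        results[sid] = 1 + sum(1 for (_, _, g) in group if g > score)
    return results

# Toy example with five schools and window=2 for readability.
toy = [("A", 0.10, 210), ("B", 0.20, 230), ("C", 0.30, 220),
       ("D", 0.40, 250), ("E", 0.50, 240)]
print(ssc_rank(toy, window=2))
# → {'A': 3, 'B': 2, 'C': 4, 'D': 1, 'E': 2}
```

Schools at the ends of the similarity ordering necessarily have smaller comparison groups, a boundary issue any such model must address in some way.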
Dean, J. R. (2015). Comparing Schools: From Value Added to Sound Policy. Theses and Dissertations. Retrieved from https://scholarworks.uark.edu/etd/1139