In the February 2017 issue of Statistics and Public Policy, Senior Research Scientist Daniel Wright considers value-added models (VAMs) from the perspective of graphical models, identifying situations that are problematic for VAMs. In his article, “Using Graphical Models to Examine Value-Added Models,” Dr. Wright explores “whether the VAM estimates of school effectiveness are similar to the true school effectiveness, not to other estimates that may have similar biases to the VAM estimates to which they are compared. Simulation methods are used to allow access to true school effectiveness values for the model assumed and therefore allow assessment of the validity of the statistical model.”
Abstract
Value-added models (VAMs) of student test scores are used in education because they are intended to measure school and teacher effectiveness. Much research has compared VAM estimates across different models, against other measures (e.g., observation ratings), and within experimental designs. Here, VAMs are considered from the perspective of graphical models, and situations are identified that are problematic for them. If the previous test scores are influenced by variables that also influence the true effectiveness of the school or teacher, and there are variables that influence both the previous and current test scores, then the estimates of effectiveness can be poor. Those using VAMs should consider the models that may give rise to their data and evaluate their methods against these models before using the results for high-stakes decisions.
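The confounding scenario described in the abstract can be illustrated with a small simulation, in the spirit of the article's approach of generating data where the true school effectiveness is known. The data-generating process below is a hypothetical sketch, not the paper's actual model: `u` is a school-level variable that (in the confounded scenario) drives both prior scores and true effectiveness, and `v` is a student-level variable that affects both the prior and current scores directly. A naive VAM then regresses current scores on prior scores plus school dummies.

```python
import numpy as np

# Hypothetical data-generating process (a sketch, not the paper's exact model):
#   u : school-level variable; in the "confounded" scenario it drives BOTH
#       the prior scores and the true school effectiveness.
#   v : student-level variable that, when confounded, affects prior AND
#       current scores directly, not only through the prior score.
N_SCHOOLS, N_STUDENTS = 60, 250

def simulate(confounded, rng):
    u = rng.normal(size=N_SCHOOLS)
    true_eff = rng.normal(scale=0.3, size=N_SCHOOLS)
    if confounded:
        true_eff = true_eff + 0.8 * u            # u -> true effectiveness
    school = np.repeat(np.arange(N_SCHOOLS), N_STUDENTS)
    v = rng.normal(size=school.size)
    prior = u[school] + v + rng.normal(size=school.size)
    direct = 2.0 * v if confounded else 0.0      # v -> current score directly
    current = (0.7 * prior + true_eff[school] + direct
               + rng.normal(size=school.size))
    return prior, current, school, true_eff

def vam_school_effects(prior, current, school):
    """Naive VAM: regress current score on prior score plus school dummies."""
    X = np.zeros((prior.size, 1 + N_SCHOOLS))
    X[:, 0] = prior                              # prior-score slope
    X[np.arange(prior.size), 1 + school] = 1.0   # one dummy per school
    coef, *_ = np.linalg.lstsq(X, current, rcond=None)
    return coef[1:]                              # estimated school effects

rng = np.random.default_rng(0)
corrs = {}
for label, flag in [("clean", False), ("confounded", True)]:
    prior, current, school, true_eff = simulate(flag, rng)
    est = vam_school_effects(prior, current, school)
    corrs[label] = np.corrcoef(est, true_eff)[0, 1]
    print(f"{label:10s} corr(VAM estimate, true effectiveness) = {corrs[label]:+.2f}")
```

In the clean scenario the regression is correctly specified and the estimated school effects track true effectiveness closely; in the confounded scenario the slope on the prior score is biased by the student-level variable, and the school effect estimates can correlate weakly, or even negatively, with the truth, which is the kind of failure the article warns about.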