Imagine visiting your doctor. Like most physicians, he measures your blood pressure. Seeing the results, he notices there might be a problem. So he proposes measuring your blood pressure less often.
Wait, less often? Shouldn’t he be checking it more often? After all, there’s a possible problem.
No, he explains. Doctors are measuring blood pressure too often. He thinks his patients' blood pressure is high because of all that measuring, and that it will go down if it's measured less.
A bizarre scenario? Sure, but it’s currently happening in education.
Some education organizations, legislators, and national leaders say that annual state tests harm education. They want to measure less. In fact, they argue, annual state exams may be needed only every four years.
As with the blood pressure patient, that would be the wrong response. Evidence shows that annual tests actually help, not harm, education.
As an example, a review of 14 studies by education researchers David Figlio and Susannah Loeb shows that annual tests and sanctions lead to higher scores. Studies also reveal that states, districts, and schools have responded to the data from tests. For example, they are focusing on the subjects being tested and ensuring that curricula and instruction align with what is being tested.
Those shifts have led to complaints that exams are changing what's happening in classrooms. Most of the evidence for those complaints is anecdotal, meaning there is no rigorous research to back the claims.
But, if that’s what is happening, is that so bad? Changing what happens in classrooms was the point of school accountability, which includes annual, objective exams. Nobody would suggest that policymakers try to improve education by reinforcing the status quo.
And let’s be clear. If we think testing less often will improve education, how will we know it did?
We can look at scores for fourth and eighth graders, which are the usual grades considered for testing, and compare them to scores from last year’s fourth and eighth graders.
But we would be comparing different groups of students, and we won't know how much their differences explain differences in scores. It's like comparing this year's students with last year's and observing that this year's group is taller. Not much useful information there.
Now, we could wait four years and compare how students scored when they were fourth graders and how they scored later as eighth graders. If the results are higher, we could conclude that not testing in the interim at least did no harm.
But what if their scores were lower once they tested in eighth grade, or fell short of being college- or career-ready? Now we have a problem.
We will have learned that students are less prepared for high school. Expecting high schools to close possibly large gaps with only four years left before we want students to be ready for college and careers is a big gamble. High schools are not designed to be interventions of last resort for struggling students.
The fact is, annual, objective testing provides useful, instructive information.
*The information measures growth and progress during the crucial third-through-eighth-grade span, a time when students learn to comprehend what they are reading and develop math skills they will continue to use in high school and college.
*Testing data enables fair assessments of teacher performance by measuring the skills of a teacher's incoming class before the school year begins.
*The results enable parents, communities, and educators to know whether higher or lower performance levels can be attributed to the performance of teachers and schools. And by testing in multiple grades, the burden of testing is not focused entirely on a single grade level, such as fourth grade.
Of course, annual tests have to be based on state standards. Tests that are off the mark don't help anyone. And the process by which tests are designed, and what information is gained from them, could be better explained to educators. Teachers should come to see "teaching to the test" as a positive rather than a negative.
Yet, like blood pressure measurements, we should not talk ourselves into believing that all is fine if we just test less often. Information can reveal problems. It doesn’t cause them. We need to be asking how we are addressing problems that the information tells us are there.
Mark Dynarski is a Bush Institute education consultant and president of Pemberton Research in New Jersey.
Mark Dynarski is founder and president of Pemberton Research, which focuses on understanding and utilizing research evidence in decision making. Previously, he was vice president and director of the Center for Improving Research Evidence at Mathematica Policy Research. He also served as director of the What Works Clearinghouse at the Institute of Education Sciences at the U.S. Department of Education, and as director and principal investigator of numerous education programs with a focus on at-risk children and youth. Currently he is a nonresident senior fellow at the Brown Center on Education Policy at the Brookings Institution.
Dynarski is an advisor to government agencies, philanthropies, and nonprofit organizations. He is well known for his expertise in econometrics and evaluation methodology, including the design, implementation, and analysis of evaluations of education programs using random assignment and quasi-experimental designs.
Dynarski has published widely in peer-reviewed journals, including Educational Researcher, Educational Leadership, and Journal of Education for Students Placed at Risk. He is also on the editorial boards of Effective Education and The Elementary School Journal.
Dynarski earned an M.A. and Ph.D. in economics from the Johns Hopkins University and holds a B.A. in economics from the State University of New York at Geneseo. He also was a tenured professor of economics at the University of California, Davis, where he taught theory, statistics, and econometrics.