Imagine visiting your doctor. Like most physicians, he measures your blood pressure. After he sees the results, he notices there might be a problem. So, he proposes that he will measure your blood pressure less often.
Wait, less often? Shouldn’t he be checking it more often? After all, there’s a possible problem.
No, he goes on to explain. Doctors are measuring blood pressure too often. He thinks his patients' blood pressure is high because of all that measuring, and it will go down if it is measured less.
A bizarre scenario? Sure, but it’s currently happening in education.
Some education organizations, legislators, and national leaders say that annual state tests harm education. They want to measure less. Some even suggest that state exams need to be given only every four years.
As with the blood pressure patient, that would be the wrong response. Evidence shows that annual tests actually help, not harm, education.
As an example, a review of 14 studies by education researchers David Figlio and Susannah Loeb shows that annual tests and sanctions lead to higher scores. Studies also reveal that states, districts, and schools have responded to the data from tests. For example, they are focusing on subjects being tested and ensuring that curricula and instruction align with what is being tested.
Those shifts have led to complaints that exams are changing what's happening in classrooms. Most of the evidence behind those complaints is anecdotal; there is little rigorous research to back the claims.
But, if that’s what is happening, is that so bad? Changing what happens in classrooms was the point of school accountability, which includes annual, objective exams. Nobody would suggest that policymakers try to improve education by reinforcing the status quo.
And let’s be clear. If we think testing less often will improve education, how will we know it did?
We can look at scores for fourth and eighth graders, which are the usual grades considered for testing, and compare them to scores from last year’s fourth and eighth graders.
But we are comparing different groups of students, and we won’t know how much their differences explain differences in scores. It’s like comparing this year’s students and last year’s students and observing that this year’s group is taller. Not much useful information there.
Now, we could wait four years and compare how students scored when they were fourth graders and how they scored later as eighth graders. If the results are higher, we could conclude that not testing in the interim at least did no harm.
But what if their scores were lower once they tested in eighth grade, or fell short of being college- or career-ready? Now we have a problem.
We will have learned that students are less prepared for high school. Expecting high schools to close possibly large gaps with only four years left before we want students to be ready for college and careers is a big gamble. High schools are not designed to be interventions of last resort for struggling students.
The fact is, annual, objective testing provides useful, instructive information.
* The information measures growth and progress during the crucial third-through-eighth-grade span, a time when students learn to comprehend what they are reading and develop math skills that they will continue to use in high school and college.
* Testing data enables fair assessments of teacher performance by measuring the skills of a teacher's class before the school year begins.
* The results enable parents, communities, and educators to know whether higher or lower performance levels can be attributed to the performance of teachers and schools.
* By testing in multiple grades, the burden of testing is not focused entirely on a single grade level, such as fourth grade.
Of course, annual tests have to be based on state standards. Tests that are off the mark don't help anyone. And the process by which tests are designed, and what information is gained from them, could be better explained to teachers and educators. Educators should come to see "teaching to the test" as a positive rather than a negative.
Yet, as with blood pressure measurements, we should not talk ourselves into believing that all is fine if we just test less often. Information can reveal problems. It doesn't cause them. We need to be asking how we are addressing the problems that the information tells us are there.
Mark Dynarski is a Bush Institute education consultant and president of Pemberton Research in New Jersey.
Mark Dynarski is founder and president of Pemberton Research, which focuses on understanding and utilizing research evidence in decision making. Previously, he was vice president and director of the Center for Improving Research Evidence at Mathematica Policy Research. He also previously served as director of the What Works Clearinghouse at the Institute of Education Sciences at the U.S. Department of Education, and as director and principal investigator of numerous education programs with a focus on at-risk children and youth. Currently he is a nonresident senior fellow at the Brown Center on Education Policy at the Brookings Institution.
Dynarski is an advisor to government agencies, philanthropies, and nonprofit organizations. He is well known for his expertise in econometrics and evaluation methodology, including the design, implementation, and analysis of evaluations of education programs using random assignment and quasi-experimental designs.
Dynarski has published widely in peer-reviewed journals, including Educational Researcher, Educational Leadership, and Journal of Education for Students Placed at Risk. He is also on the editorial boards of Effective Education and The Elementary School Journal.
Dynarski earned an M.A. and Ph.D. in economics from the Johns Hopkins University and holds a B.A. in economics from the State University of New York at Geneseo. He also was a tenured professor of economics at the University of California, Davis, where he taught theory, statistics, and econometrics.