None of us looked forward to taking our driver’s test. Yet none of us thinks doing away with the test is such a good idea. Passing the test requires having basic knowledge that society wants drivers to have. Education is similar. Having students demonstrate that they’ve learned is intrinsic to education.
Popular accounts in the media would have parents believe that in today’s classrooms kids stagger from one test to another. Parents need not worry.
Many classrooms today look a lot like classrooms have always looked: a group of about 20 to 25 kids led by a teacher through different subjects. And teachers spend most of their time teaching. Recent studies report that the amount of time spent on testing is modest.
It’s useful to unpack "tests." There are different tests for different purposes. There are tests that teachers give to monitor how students are learning, tests that states give to monitor how schools (and, in some places, teachers) are performing, and tests that districts administer for similar monitoring purposes.
Tests that teachers give serve a vital purpose in the classroom. Some parents may believe teachers overdo classroom tests. Why not just learn and enjoy learning?
Actually, some of the nation’s top cognitive scientists have shown that frequent testing improves learning. Henry Roediger and colleagues have studied memory and learning and found that students who simply study and study again do not master the material as well as students who study, are tested, and then study again.
Being tested on what is supposed to be learned in the classroom helps learning more than just studying a lot. It is not an either-or. In fact, tests increase learning. Tests are good, not bad.
Then comes the annual state test. More tests! Well, they have their purpose, too.
They tell us whether what was being taught and tested all year actually stuck, whether students learned what they should have in that grade according to state standards. State tests are also an independent check – a way to see if what the school is teaching and testing is working.
This is one of the reasons state testing began. States wanted an objective way to know schools were doing as well as they said they were. And, in most states, the state government is the primary funder of schools. Measuring how schools are performing is consistent with effective government.
At first, testing occurred maybe once in elementary school, once in the middle grades and maybe once in high school. But schools complained, and rightly so, that everything became focused on the grade that was tested.
The best teachers were sent to tested grades, and there was no comparable information for other grades to know how they were doing. Even worse, there was no way to get independent information on the gains students made each year.
Did students in one class or school gain more than students in another class or school? Did the fifth grade do well last year? In schools where students were far behind, did those students make gains that will help them catch up?
Unlike local school tests, annual state tests must meet scientific standards, which are laid out in detail in a handbook prepared by three professional organizations. The tests are designed by teams working with state agencies and include significant input from teachers in the state. And state tests are "standardized," a term that has a specific meaning when used to describe a test. Students are asked the same questions under similar testing conditions.
Standardization makes a big difference. Because of it, annual test scores can be compared between schools or districts.
A school that does better than another school on the annual test is doing better, period. Poverty levels, English-speaking ability, numbers of students with various disabilities or other factors may explain why scores are different, but the difference is real.
The problem that annual tests solve for parents is that teachers can differ in their test and grading approaches. A parent whose child is getting an A in math might wonder if their child really is achieving at the level suggested by that grade. The A grade given by their child’s teacher may only be equivalent to average math proficiency.
This is more than hypothetical. The U.S. Department of Education’s massive "Prospects" study in the 1990s found exactly this when it compared test scores from a standardized test to student grades in high and low poverty schools.
Current policy requires an annual standardized assessment in grades three through eight and in one high school grade. It also requires consequences if students are not making adequate gains.
Researchers have studied whether consequences matter for school performance. The answer is that consequences matter. Studies showed that test scores increased more in states that had consequences than in states that did not (before 2002 states could choose their own consequences).
What is surprising is that research has shown schools improve even before they hit consequences. The desire to not hit them spurs improvement.
Suppose your town posts a speed limit sign, and right below that sign posts a second sign saying "speed limits will not be enforced." Some drivers will slow down, others won’t. Now remove the second sign. Will more drivers slow down? Probably. Speeding now has a consequence.
A third kind of test has emerged in the last decade, called a benchmark test. Its length can vary and it is often administered on a computer. It provides teachers with rapid feedback about whether their students have acquired skills. That helps teachers focus on students and topics that need it.
Benchmark tests occupy a place between a teacher test and an annual state test. They are a way for teachers to test whether their students are mastering material without having to develop their own exams, and they provide information to help teachers see how their students are doing compared to other students.
Whether benchmark tests predict how well students will score on the annual state test is questionable. One study in the mid-Atlantic region reported that it could find little information on how various benchmark tests predicted state tests.
We don’t expect parts for Chevrolets to fit in Fords, though they might. That happens with tests too. The same benchmark test used by school districts in, say, Pennsylvania and in New Jersey might predict one state’s test and not the other.
A study of whether using benchmark tests improved scores on the Massachusetts state test found they made no difference. We can speculate on why this is so.
Maybe tests that teachers already were giving had the same effect on learning and made the benchmark tests redundant. Or maybe the benchmark test did not test what was being tested by the state.
The benchmark test might have had questions about adding and subtracting integers, for example, and the state test had questions about fractions. Either way, evidence for using benchmark tests is lacking.
It’s easy for parents to confuse annual tests and benchmark tests. The tests share some features, such as being created by professional test designers using the scientific standards mentioned above.
There are crucial differences, though. Districts choose their own benchmark tests (neighboring districts might use different ones), scores are not public, and there are no consequences for scoring poorly on them.
Parents might think their child is being tested frequently, and might think it has to do with the annual test. But annual tests are only a few hours during one week of the year.
If a parent thinks their child is being tested a lot, benchmark tests probably are the reason. A recent study of 14 school districts reported that tests required by those districts were far more numerous than tests required by states.
How often benchmark tests are given is up to districts and even individual teachers to decide. Districts anxious about annual tests might require benchmark tests to be administered frequently. Teachers anxious about meeting their "student growth objectives" might administer benchmark tests frequently, even if their district does not require them to.
TOO MUCH TESTING?
Is there too much testing? Well, do we want fewer classroom tests? No. When to test is really up to teachers, and research shows that more classroom testing improves learning.
Do we want to do away with annual tests? No, these are central to understanding how our students are performing and how governments are using our tax dollars. Annual tests give parents an independent and objective basis to judge their schools and to know whether their child is at grade level or above or below it.
Do we want fewer benchmark tests? These tests may help teachers focus their efforts, and they may help districts identify effective and ineffective teachers. But are benchmark tests being overused? Do they have to be given so often? That’s where discussion should focus, because the answer is: possibly.
Are tests bad? No, they help to improve and measure learning. Can they be overused? Sure. But let’s not rush to judgment about an issue that is central to education.
Mark Dynarski is President of Pemberton Research in New Jersey and a Bush Institute Education Fellow.
Mark Dynarski is founder and president of Pemberton Research, which focuses on understanding and utilizing research evidence in decision making. Previously, he was vice president and director of the Center for Improving Research Evidence at Mathematica Policy Research. He also previously served as director of the What Works Clearinghouse at the Institute of Education Sciences at the U.S. Department of Education, and as director and principal investigator of numerous education programs with a focus on at-risk children and youth. Currently he is a nonresident senior fellow at the Brown Center on Education Policy at the Brookings Institution.
Dynarski is an advisor to government agencies, philanthropies, and nonprofit organizations. He is well known for his expertise in econometrics and evaluation methodology, including the design, implementation, and analysis of evaluations of education programs using random assignment and quasi-experimental designs.
Dynarski has published widely in peer-reviewed journals, including Educational Researcher, Educational Leadership, and Journal of Education for Students Placed at Risk. He is also on the editorial boards of Effective Education and The Elementary School Journal.
Dynarski earned an M.A. and Ph.D. in economics from the Johns Hopkins University and holds a B.A. in economics from the State University of New York at Geneseo. He also was a tenured professor of economics at the University of California, Davis, where he taught theory, statistics, and econometrics.