Art by Peau Porotesano
Standardized tests: You couldn’t have gotten into Pepperdine without taking at least a few of them, and even before the dreaded SAT or ACT, there was a slew of standardized tests throughout primary and secondary school.
Standardized testing in the U.S. dates back to the Industrial Revolution: as it "took school-age kids out of the farms and factories and put them behind desks, standardized examinations emerged as an easy way to test large numbers of students quickly," according to Dan Fletcher's TIME article "Standardized Testing," published Dec. 11, 2009.
As time went on, the SAT and ACT were created to evaluate students entering college. Then, when No Child Left Behind was signed into law in 2001, the lucky students in grades 3 through 8 (yay us!) were required to take standardized tests every year in order to determine the quality of public education for all students, according to the PBS.org article "No Child Left Behind – The New Rules."
The problem is that standardized tests aren't an accurate measure of the quality of a student's education, or even of a student's intellect. These tests often show inherent biases. A 2013 report from the Annie E. Casey Foundation titled "Early Warning Confirmed" describes how "researchers of the poverty/achievement connection have quantified the gap between children from low-income and wealthier families and tracked the gap's growth over time.
An analysis of data from 19 nationally representative studies by Stanford University sociologist Sean Reardon found that the gap between children of families from the lowest and highest quartiles of socioeconomic status is more than one standard deviation on reading tests at kindergarten entry, an amount equal to roughly three to six years of learning in middle or high school.”
They also found that the relationship “between a family’s position in the income distribution and their children’s academic achievement has grown substantially stronger during the last half-century.”
In addition, these scores may not mean much scholastically speaking. Aspects of an application such as high school GPA are the main determinants of how well a student does at a university, not SAT scores. In fact, those scores don't say much at all about how well a student will do in school, according to a Feb. 18, 2014 PBS article, "Do ACT and SAT scores really matter? New study says they shouldn't," by Sarah Sheffer.
But what can be used in lieu of standardized tests? Despite all their shortcomings, they give a common bar of measure for schools across the country, and there currently isn't much of a replacement system in place. In her Jan. 6, 2015 NPR article "What Schools Could Use Instead of Standardized Tests," Anya Kamenetz discussed what else could be done. For one, states could have representative samples of the population take the standardized exams rather than every single student.
Secondly, the software that large textbook corporations push out with their textbooks could be used to measure students' improvement as a sort of "stealth assessment," which would get rid of the anxiety that comes with taking one long standardized test. Thirdly, schools could provide a wider range of measures beyond a single test, or, as in Scotland, be inspected by government officials.
Lastly, it's important to remember that standardized tests don't measure intelligence. What they measure is how well a student can sit and take a test. They measure how well students can learn the tricks to beat the system. They place entire futures on one three- to four-hour block of time. Maybe it's time to just do away with them.
Follow the Pepperdine Graphic on Twitter: @peppgraphic