Apodaca: Color codes and numbers don’t tell a school’s story
Mid-summer is typically a time when my head hurts if I try to think too hard. And boy, did my head feel like exploding when I started to examine the state’s proposed new color-coded system for grading public schools.
The idea is meant to replace the old, unpopular method ushered in under the federal No Child Left Behind Act, which was abandoned by California a few years ago. Under that system, schools were assigned a numeric value based on standardized test scores. It was called the Academic Performance Index, or API.
Critics of the API argued, among other complaints, that a single score for each school based solely on testing was unfair and overly simplistic. Evaluating a school’s effectiveness and levels of improvement over time requires a more holistic, balanced, and comprehensive system, critics maintained.
Now, under the Every Student Succeeds Act, states are still required to evaluate schools annually, but they are urged to do so in a more in-depth fashion. California is seeking to establish a system that goes beyond relying almost exclusively on test scores and instead presents a more detailed and meaningful assessment of a school’s accomplishments and challenges.
The new proposal, called “The California Model,” was designed by the nonprofit consulting firm WestEd. It features a grid of 17 colored boxes for every school. Each box is assigned a specific category, such as math or English proficiency, graduation rate, absenteeism, college and career readiness, and parental engagement. While many of the categories seem straightforward enough, a few have arguably tenuous links to educational outcomes (suspensions), while others appear a bit vague (“school climate”).
Various colors represent different levels of achievement, with separate boxes indicating the current status of each category and yet more showing whether schools are improving in those areas or getting worse. Green is good, red is bad, and lots of other colors indicate different mixtures of goodness and badness.
Are you following along so far?
Indeed, the proposed new system gives us a more detailed look, but in doing so it moves us from being too simplistic to so complicated that it requires consulting a separate key, with yet another grid of 25 boxes, in order to figure out the specific meaning of each box on a school’s chart. This might be progress, but in going from a mind-numbingly shallow and misleading index to a tortuous labyrinth, it won’t be an easy change to embrace.
It also points to the problems inherent in the concept of grading schools in the first place.
First, as we can see with this new attempt, it’s really hard to find a means to accurately depict how our schools are actually doing. Yes, this proposed “California Model” is more complex and thoughtful, but it could also produce as unsatisfying and incomplete a picture of school health as any other system. No matter how many “objective” measures are included, it’s simply not possible to capture the myriad challenges and unique features of individual schools in one neat, easy-to-understand package.
Another issue that’s impossible to ignore is that any grading system, no matter how carefully crafted, will be vulnerable to manipulation. Just as the API induced schools to take measures to improve their test scores without making any substantive academic progress — and in some cases even resorting to unethical steps to boost numbers — the proposed grading method will undoubtedly include some carefully massaged results. It’s just the nature of the beast.
For all the good intentions and sincere attempts to give communities a clearer picture of how their schools are doing, I doubt anyone not intimately familiar with the nuts and bolts of education will easily sort through the most relevant information. Will parents looking at this new system be able to answer with confidence the most fundamental question: whether a school is doing a good job educating students? It’s doubtful.
Many observers have called for the California Board of Education to reject this new proposal and develop a system that lies somewhere between the one-dimensional API and the elaborate California Model, one that features more measures of progress than test scores but doesn’t require code-cracking skills to decipher.
That would probably be the best outcome we could hope for at this point. But whatever system is settled upon, keep in mind that no method of grading schools will be wholly fair or satisfying. So many factors go into making education work — or not — that we’ll never be able to capture them all in a single assessment system.
More to the point, the challenges faced by schools are incredibly varied and deep-rooted. We know that entrenched societal issues such as poverty and hunger are the greatest enemies of educational effectiveness. But it simply isn’t possible to show all the ways that schools and individual teachers fight to improve the prospects of their students from impoverished backgrounds.
That’s not to say we shouldn’t try, or that schools should not be held accountable. But any assessment system must be considered in context, and with the intent not to rebuke educators for shortcomings or boast over high rankings. Instead, we must use the information gathered to diagnose and then target resources at schools and educational trouble spots that need help to do a better job.
--
PATRICE APODACA is a former Newport-Mesa public school parent and former Los Angeles Times staff writer. She lives in Newport Beach.