"Growth models:" an idea that's not so new
Apparently, common sense is breaking out in up-state New York. From the New York Times' Winnie Hu:
The Cohoes city school district, outside Albany, is considering a gifted program for elementary students and adding college-level courses after discovering that its top students improved less on standardized tests in the past two years than everyone else in the district.
In Ardsley, N.Y., a Westchester County suburb, administrators intend to place more special education students in regular classes after seeing their standardized test scores rise in the last year.
And as the New York City Department of Education begins grading each public school A to F for the first time this fall, more than half the evaluation will be based on how individual students progress on standardized tests.
All three changes resulted from an increasingly popular way of analyzing test scores, called a “growth model” because it tracks the progress of students as they move from grade to grade rather than comparing, say, this year’s fourth graders with last year’s, the traditional approach.
Concerned that the traditional way amounted to an apples-to-oranges comparison, schools in more than two dozen states have turned to growth models. Now a movement is mounting to amend the federal No Child Left Behind Act, which is up for reauthorization this year, to allow such alternative assessments of student progress.
Many urban educators contend that growth models are a fairer measure because they recognize that poor and minority students often start out behind, and thus have more to learn to reach state standards. At the same time, many school officials in affluent suburbs favor growth models because they evaluate students at all levels rather than focusing on lifting those at the bottom, thereby helping to justify instruction costs to parents and school boards at a time of shrinking budgets.
Adding growth models as a way to satisfy federal requirements to demonstrate “adequate yearly progress” could make it easier for some schools to avoid penalties because they would receive credit for students who improve performance but still fall below proficiency levels. It could also increase pressure on high-performing schools that sail above state standards to prove that their students are continuing to advance.
Federal education officials agreed in 2005 to a pilot program allowing up to 10 states to experiment with growth models, but emphasized that they remained responsible for ensuring that all students would reach reading and math standards by 2014, and show consistent gains along the way. Seven states — North Carolina, Tennessee, Arkansas, Delaware, Ohio, Florida and Iowa — have joined the pilot so far, federal officials said, and on Tuesday, the Education Department green-lighted Alaska and Arizona to use growth models to analyze data from the 2006-7 school year.
“A growth model is a way for states that are already raising achievement and following the bright-line principles of the law to strengthen accountability,” Margaret Spellings, the secretary of education, said in a statement. “We are open to new ideas, but when it comes to accountability, we are not taking our eye off the ball.”
And there's more to read.
You know, I could SWEAR that the idea of a "growth model" is far from a "new idea," in the words of the Queen of Charts. Hmmm, when have I seen that before?
It'll come to me....
Oh yes, during the many previous years of my educational career, including my time as a student. I still remember my and my classmates' percentile scores in reading, science, social studies, and math being compared every year to see if we really WERE making improvement (and strangely, they also told us our IQ scores, something I hear isn't done any more). I would have thought it absurd if my scores had been compared with the scores of the kids a year younger or older than my class. Because, you know, I worked hard for those scores, and those were DIFFERENT PEOPLE. Some of whom spent their days NOT reading. Some of whom spent their days smoking things.
Also known as a "longitudinal study" or "panel study" in the world of the social sciences, it's a fancy name for what is just common sense. Which is, of course, sadly lacking in much of the NCLB Act.
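For the curious, here is roughly what the difference boils down to, as a minimal sketch with made-up numbers (Python, hypothetical student names and scale scores, not any state's actual growth formula): the traditional "status" comparison pits this year's fourth graders against last year's entirely different fourth graders, while a growth model follows the same students from one year to the next.

```python
# Illustrative sketch only: toy numbers, not any state's actual growth formula.
# Scores are keyed by student so the same children can be followed across years
# (the "growth model" / longitudinal idea), instead of comparing this year's
# 4th graders with last year's entirely different 4th graders.

# Hypothetical scale scores for the same three students in consecutive years.
scores_2006 = {"student_a": 210, "student_b": 188, "student_c": 225}  # grade 3
scores_2007 = {"student_a": 221, "student_b": 202, "student_c": 228}  # grade 4

# Last year's 4th-grade cohort -- different children entirely.
last_years_4th_grade = {"student_x": 230, "student_y": 215, "student_z": 219}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Traditional "status" comparison: this year's 4th graders vs. last year's.
cohort_change = mean(scores_2007.values()) - mean(last_years_4th_grade.values())

# Growth-model comparison: each student's gain over his or her own prior score.
gains = {s: scores_2007[s] - scores_2006[s] for s in scores_2006}
average_gain = mean(gains.values())

print(f"Cohort-to-cohort change: {cohort_change:+.1f}")  # looks like a decline
print(f"Per-student gains: {gains}")                     # every student improved
print(f"Average within-student gain: {average_gain:+.1f}")
```

With these toy numbers the cohort comparison shows a drop (a weaker class followed a stronger one), while the within-student gains show every child improving, which is exactly the apples-to-oranges problem the article describes.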
Labels: high-stakes testing, NCLB
1 Comment:
Is there anything new in education? We are the best recyclers ever!
This is common sense -- who hasn't wrung their hands over the dip in scores from a weak class taking the MAP the year after a really strong class? We actually do look at individual scores from year to year as a district, but the big changes in the MAP last year made that much more difficult (not that you get the scores back in any kind of time frame that makes them useful).
(I wonder how the new exit exams are going to look... spending my summer mapping my courses to prepare.)