Tuesday, March 24, 2009
Reading - The Importance of Content-Specific Assessment
An important editorial by E.D. Hirsch in the New York Times titled "Reading Test Dummies" looks at how we measure reading achievement in our nation's schools (including here in Hoboken). For all the discussion about the need for more responsive tests to measure school performance and student learning, policymakers at the district, state and federal levels often overlook a critical shortcoming of existing reading assessments: the content on them is totally disconnected from the vocabulary and content children actually learn in school.
A 1988 study illustrates why aligning test content with what students actually study matters. Experimenters separated seventh- and eighth-grade students into two groups — strong and weak readers as measured by standard reading tests. The students in each group were subdivided according to their baseball knowledge. Then they were all given a reading test with passages about baseball. Low-level readers with high baseball knowledge significantly outperformed strong readers with little background knowledge. The experiment confirmed what language researchers have long maintained: the key to comprehension is familiarity with the relevant subject. For a student with a basic ability to decode print, a reading-comprehension test is not chiefly a test of formal techniques but a test of background knowledge.
As Hirsch writes: “The problem is that the reading passages used in these tests are random. They are not aligned with explicit grade-by-grade content standards. Children are asked to read and then answer multiple-choice questions about such topics as taking a hike in the Appalachians even though they’ve never left the sidewalks of New York, nor studied the Appalachians in school.”
He continues: “Teachers can’t prepare for the content of the tests and so they substitute practice exams and countless hours of instruction in comprehension strategies like ‘finding the main idea.’ Yet despite this intensive test preparation, reading scores have paradoxically stagnated or declined in the later grades.”
Why is this? Primarily it is because the schools have imagined that reading is merely a “skill” that can be transferred from one reading passage to another, and that reading scores can be raised by having young students endlessly practice strategies on trivial stories. Tragic amounts of time have been wasted that could have been devoted to enhancing knowledge and vocabulary, which would actually raise reading comprehension scores.
Research on the importance of content and context (some of which I conducted with colleagues at Vanderbilt University) argues that we could improve both reading assessment and reading instruction if we replaced the current ill-informed model with reading assessments in which the passages students are asked to read and analyze focus on topics that are aligned with the curriculum that children actually study in literature, science, social studies, the arts, and other subject areas in each grade.
Doing this would also essentially force states to improve the quality of their standards in these subject areas so that they provide more useful information to teachers about what students are expected to learn in each grade. We are not there yet, and it is not clear whether we will get there anytime soon, but aspects of our revised curriculum will take this into account where it does not conflict directly with state standards.
I would encourage you to read Hirsch’s entire piece, “Reading Test Dummies,” by clicking HERE.
1 comment:
Diagnostic assessments are essential instructional tools for effective English-language arts and reading teachers. However, many teachers resist using these tools because they can be time-consuming to administer, grade, record, and analyze. Some teachers avoid diagnostic assessments because they focus exclusively on grade-level standards-based instruction or believe that remediation is (or was) the job of some other teacher. To be honest, some teachers resist diagnostic assessments because the data might induce them to differentiate instruction—a daunting task for any teacher. And some teachers resist diagnostic assessments because they fear that the data will be used by administrators to hold them accountable for individual student progress. Check out ten criteria for effective diagnostic ELA/reading assessments at http://penningtonpublishing.com/blog/reading/ten-criteria-for-effective-elareading-diagnostic-assessments/ and download free whole-class comprehensive consonant and vowel phonics assessments, three sight word assessments, a spelling-pattern assessment, a multi-level fluency assessment, six phonemic awareness assessments, a grammar assessment, and a mechanics assessment from the right column of this informative article.