Many factors affect a student's ability to successfully comprehend a text. Some students struggle with decoding the text they encounter or with the language structures (e.g., phrases and idioms) used. Other students may possess limited background knowledge about the topic of the text, or they may simply not be interested in what they are reading. While Let's Go Learn's comprehension test presents students with non-fiction topics that they are likely to have encountered in school, some groups of students may have less familiarity with the subject matter in DORA than in other comprehension assessments.
Another factor that can make DORA scores seem lower is prior testing with traditional teacher-mediated, pen-and-paper assessments. On these assessments there is greater room for discrepancy, as teachers often ask follow-up questions to clarify students' responses, and students often become familiar with the administration protocol. Let's Go Learn's DORA removes some of this variability often associated with teacher-mediated assessments.
Also, because DORA is criterion-referenced (that is, based on a set of criteria identified by experts), it is possible that its items differ from those on other criterion-referenced assessments you may have encountered. This does not diminish the utility of DORA's comprehension sub-test or the meaningful information it produces; it simply means that its difficulty must be considered relative to other available comprehension tests.
The avoidance of false positives, as mentioned in the previous question, is also a factor that can make scores appear lower. If other comprehension measures used in the past have a lower degree of false-positive aversion, then the difference when comparing DORA to such a measure may appear significant. Our philosophy is that it is worth avoiding incorrectly labeling a low-comprehension student as high, even if it means occasionally labeling a high-comprehension student as slightly lower than his or her true ability. Make no mistake: every comprehension measure must choose one bias or the other; there is no way to avoid both.
One final factor to consider is student motivation. Longer assessments run a higher risk of fatiguing the student, and motivation is the factor that causes the greatest variance in test scores. Therefore, students need to be properly introduced to DORA. Teachers should stress that this assessment will help them do a better job of instructing the students. The assessment should also be broken into manageable sessions, and students should be monitored during testing. If some students seem fatigued, the teacher should consider stopping the assessment and resuming it later.
In summary, many factors might make it appear, on occasion, that students' scores on DORA's Silent Reading sub-test are lower than their actual reading ability as indicated by other reading measures. However, when examining the biases of each measure and interpreting DORA in light of what it seeks to do, these discrepancies, if any, can usually be explained or accounted for. Furthermore, there is a low probability that any discrepancy between measures will be large enough to negatively affect any particular student's instructional plan.
Tags: Why do the Silent Reading sub-test scores on DORA seem low for my students?