When a test does not use a fixed form or a fixed set of questions but instead adjusts based on input from the test-taker, it is computer-adaptive. In other words, the questions change in response to the student’s answers as the test progresses.
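The core loop is simple to picture. Below is a minimal, hypothetical Python sketch of one common adaptive rule, a difficulty “staircase” that moves the next question up after a correct answer and down after a miss. Real adaptive tests typically select items with more sophisticated statistical models (such as Item Response Theory), so treat this only as an illustration of the idea.

```python
# Hypothetical illustration only: a one-step difficulty "staircase."
# Real computer-adaptive tests usually select items with statistical
# models such as Item Response Theory, but the core loop is the same:
# each response moves the next question's difficulty up or down.

def next_difficulty(current: int, answered_correctly: bool,
                    min_level: int = 1, max_level: int = 12) -> int:
    """Raise difficulty after a correct answer, lower it after a miss."""
    step = 1 if answered_correctly else -1
    return max(min_level, min(max_level, current + step))

# Example run: correct, correct, incorrect.
level = 5
for correct in (True, True, False):
    level = next_difficulty(level, correct)
print(level)  # 6 -- ends one step above the starting level
```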

In practice, there are many types of computer-adaptive tests. The SBAC, a state standards proficiency assessment used by many states for accountability testing, is computer-adaptive. It uses adaptive logic to shorten overall test time by reducing the number of questions in certain areas.

Other assessments may use computer-adaptive logic to serve their own objectives. However, the fact that a test is computer-adaptive does not by itself make it better. Short assessments, for example, may be computer-adaptive, but if they are designed as “screeners” or “standards-proficiency” tests, they cannot diagnose a student’s strengths and weaknesses in a particular area or subject with any greater accuracy.

For over 20 years, Let's Go Learn has been offering best-in-class, personalized assessments in reading and math. See for yourself!


Assessment scores indicate the type of test

Look at the final scores an assessment reports to judge what it actually does.

Examples:
67% ← Percentile scores rank a student relative to other students. This is NOT diagnostic: a relative ranking doesn’t tell a teacher what to do next with the student.
780 ← Scaled scores provide a ranking within a subject (such as reading, numbers and operations, or total math) and are typically summative measures. These are NOT diagnostic.
340 Lexile ← This is a readability score. It indicates the level of text a student can read and answer questions about correctly, but it is NOT diagnostic because it doesn’t tell you why.
3.5 gls ← If this grade-level score is attached to a broad category such as “Math” or “Reading,” it is NOT diagnostic, because it doesn’t tell you why the student is at that level. However, if it is tied to a specific sub-test of a subject, such as “Phonics,” “Place Value,” “High-Frequency Words,” or “Multiplication of Whole Numbers,” then it IS diagnostic, since each grade-level score is linked to specific instructional content. For example, at mid-third grade, a teacher would typically teach “thousands, ten-thousands, and hundred-thousands place value” within the Place Value sub-test. Grade-level scores are typically criterion-referenced, tied to externally fixed skills or concepts (a sketch of this lookup follows the list).
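To make the distinction concrete, here is a small, hypothetical Python sketch. The content bands below are invented for the example (loosely following the Place Value illustration above) and are not Let’s Go Learn’s actual curriculum mapping. The point is that a grade-level score on a specific sub-test can be looked up against fixed instructional content, while a broad “Math 3.5” cannot.

```python
# Illustrative only: this content map is invented for the example and is
# not Let's Go Learn's actual curriculum table. It shows why a grade-level
# score on a *specific sub-test* is actionable: the score can be looked up
# against fixed instructional content.

PLACE_VALUE_CONTENT = {
    2.5: "tens and hundreds place value",
    3.5: "thousands, ten-thousands, and hundred-thousands place value",
    4.5: "place value through millions",
}

def next_lesson(grade_level_score: float) -> str:
    """Return the content for the highest band at or below the score."""
    eligible = [band for band in PLACE_VALUE_CONTENT
                if band <= grade_level_score]
    band = max(eligible, default=min(PLACE_VALUE_CONTENT))
    return PLACE_VALUE_CONTENT[band]

print(next_lesson(3.5))
# -> thousands, ten-thousands, and hundred-thousands place value
```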

Early pioneers of computer-adaptive testing in education

The DORA assessment by Let’s Go Learn launched in September 2001. It was arguably the first true computer-adaptive assessment that was also diagnostic, and it operated over the Internet on a 28.8k modem. To put this in perspective, today we measure broadband in megabits per second (Mbps): a modest 30 Mbps connection is roughly 1,000 times faster, since 28.8 kbps is only about 0.03 Mbps.

[Image: Rapid Naming UI screen]

DORA assessed students in reading. It began with decoding, examining high-frequency words first. If a student’s High-Frequency Words grade-level score was low, the assessment stepped down to a lower level for Word Recognition and then into Phonics. Finally, based on the earlier scores, a comprehension passage level was selected. In this manner, even students who struggle with reading are not overwhelmed by the DORA assessment. DORA uses computer-adaptive technology to reduce test time and student frustration, which leads to a more accurate diagnostic assessment.
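That branching can be read as a small decision procedure. The sketch below is hypothetical: the function names, threshold, and stubbed scoring are invented for illustration, not DORA’s actual implementation. It only shows how earlier sub-test scores can gate which sub-tests run next and where the comprehension passages start.

```python
# Hypothetical sketch: the function names, threshold, and stubbed scoring
# below are invented for illustration and are not DORA's actual code.

def administer(subtest: str, start_level: float = 1.0) -> float:
    """Stub: pretend to run a sub-test and return a grade-level score."""
    return start_level + 0.5  # placeholder result

def estimate_passage_level(scores: dict) -> float:
    """Choose a comprehension starting passage from earlier sub-test scores."""
    return max(1.0, min(scores.values()))

def run_reading_battery() -> dict:
    scores = {"high_frequency_words": administer("High-Frequency Words", 3.0)}
    if scores["high_frequency_words"] < 2.0:  # low score: step down, don't press on
        scores["word_recognition"] = administer("Word Recognition")
        scores["phonics"] = administer("Phonics")
    # Earlier scores set the comprehension passage level, so a struggling
    # reader is never handed a frustrating passage.
    scores["comprehension"] = administer("Comprehension",
                                         estimate_passage_level(scores))
    return scores

print(run_reading_battery())
```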

In 2005, Let’s Go Learn launched the DOMA Pre-Algebra and DOMA Algebra assessments, and in 2010 it launched the ADAM assessment, covering K-7/8 foundational mathematics. These assessments are all highly diagnostic, using computer-adaptive technologies to reduce test-taking time and student frustration while improving diagnostic accuracy.