A Note from Dr. Richard McCallum and Definition of Terms
Let's Go Learn was founded on the belief that timely and accurate assessment data is a key component of successful learning. This is especially true in reading: parents and teachers need both diagnostic and ongoing assessment data to make effective instructional decisions for students. The goal of the LGL Reading Assessment is to bring the best practices in literacy assessment into an intelligent online application. To achieve this goal, we assembled a set of reading instruments, delivered online, that provide: 1) individualized assessment data in reading, and 2) a management system for the reporting and analysis of students' scores.
The most recent version of the LGL Reading Assessment provides an online tool for collecting information that would normally be gathered by a teacher or specialist using informal reading inventories, word lists, reading passages, and other classroom-based diagnostic measures of reading ability. Our goal is to use the strengths of online technology to put individualized diagnostic assessment data into the hands of educators. The tools we have developed have been warmly received by parents, educators, and administrators in schools.
There are several distinct advantages for teachers using the LGL Reading Assessment. First, teachers save time using this tool. Collecting individualized assessment data is time-consuming; teachers will tell you that the time commitment alone is enough to argue against collecting such data. Second, when students are assessed in an online environment, no data is lost. When a teacher or specialist assesses a child, subtle patterns in his or her behavior may be missed if the assessor is not highly trained and aware of the many nuances involved. In the LGL Reading Assessment, however, the thoughtful design of the test items and the database structure allows all test data to be captured. For example, in the word analysis subtest, distractors were chosen with several key variables in mind, including the nature of the sound pattern and its position in the word. Over the course of a subtest, this information can be used to identify subtle patterns in the student's responses within the measure.
Educational Expertise of Dr. Richard McCallum, Co-Founder of Let's Go Learn
Let's Go Learn was co-founded by Richard McCallum, Ph.D. For eight years, Dr. McCallum was the Academic Coordinator for the Advanced Reading and Language Leadership Program in the Graduate School of Education at the University of California, Berkeley. In Dr. McCallum's program, graduate students earned Master's degrees in Reading Education and California teaching credentials as reading specialists. In addition to the course work required for the degree, Dr. McCallum's graduate students received extensive field training through CAL Reads, a nationally recognized school-site intervention program in reading.
CAL Reads provides individualized one-on-one tutoring for low achieving elementary, intermediate, and high school students. As is the case with all effective intervention programs, CAL Reads administers individualized diagnostic reading assessments for all students served by the program. Based on these measures, an individual literacy profile is developed for every student. This profile provides the instructional roadmap for individualized reading remediation.
CAL Reads succeeds, in part, because the program collects both diagnostic and ongoing assessment data on students. This detailed information is essential if we are to bring students' reading abilities up to grade level. Unfortunately, parents and classroom teachers are not in a position to collect the type of assessment data a reading specialist or intervention program might utilize. For this reason, Richard McCallum and a small group of other experts in education and web-based business technology founded Let's Go Learn.
Validity - An assessment instrument is valid to the extent that it actually assesses the underlying skill or construct it is designed to assess. A properly calibrated postage scale, for example, is a valid means of assessing how much an envelope weighs. But assessing the component skills underlying a complex phenomenon like reading is much more difficult. The difference is that weight is a directly observable feature of physical reality, whereas reading skills are latent (not directly observable) traits within a person's mind. The validity of an instrument designed to assess such latent traits includes (1) Construct Validity: The theoretical connection between the instrument and the skill to be assessed, provided by the experts in the field who create the instrument, and (2) Criterion Validity: The empirical connection between performance on the instrument and other outcomes recognized as correlates of the skill to be assessed, such as correlation with other assessment instruments or relevant outcomes. The Let's Go Learn reading assessment subtests derive their construct validity from the same techniques and content as the CAL Reads diagnostic assessments. LGL's criterion validity is established by its high correlation with scores on CAL Reads assessments and other nationally normed commercial assessments.
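Criterion validity of the kind described above is usually quantified as the correlation between scores on the new instrument and scores on an established measure. The sketch below computes a Pearson correlation coefficient for two sets of hypothetical student scores; the scores and variable names are made up for illustration and are not LGL or CAL Reads data.

```python
def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance numerator and the two variance terms of the denominator.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for the same six students on two reading assessments.
new_test = [12, 15, 9, 20, 17, 11]
established = [14, 16, 10, 19, 18, 12]
r = pearson_r(new_test, established)  # ≈ 0.98: strong criterion evidence
```

A correlation near 1.0 means students are ranked almost identically by both instruments, which is the empirical evidence criterion validity calls for.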
Reliability - An assessment is reliable to the extent that its results are consistent over repeated administrations. Reliability is a necessary condition for validity. A perfectly reliable instrument would give the same score again and again when assessing the same person in the same skill state. In practice, however, repeated assessments of a single individual do not yield identical scores, since the person's score can be expected to rise with practice over time. The reliability of an instrument is therefore established by other means, such as comparing one part of the instrument to another (split-half reliability) or by measuring the internal consistency of test items, typically reported as Cronbach's alpha reliability coefficient. The reliability of the LGL Reading Assessment is consistently high.
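The internal-consistency measure mentioned above, Cronbach's alpha, can be computed directly from per-item scores. This is a minimal sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using hypothetical right/wrong (1/0) responses rather than any actual LGL subtest data.

```python
def cronbach_alpha(item_scores):
    """item_scores: one inner list per test item, each holding
    one score per test-taker (same order in every inner list)."""
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of test-takers

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each test-taker's total score across all items.
    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    sum_item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Hypothetical 1/0 responses: four items, five students each.
items = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 0],
]
alpha = cronbach_alpha(items)  # ≈ 0.55 for this tiny sample
```

Alpha ranges up toward 1.0 as items measure the same underlying trait consistently; published tests typically aim for values of roughly 0.8 or higher, which a five-student toy sample like this cannot demonstrate.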
Nationally Normed - Norming a test nationally means administering it to a large, nationally representative pool of test-takers. Because the sample represents the nation as a whole, individual scores can then be compared against the national norm. Unfortunately, national norming by itself says nothing about the accuracy or validity of a test: a test can be nationally normed and still be a terrible test. It is therefore important to establish a test's validity first and foremost. National norming then becomes relevant when a percentage comparison against the national norm is needed, and it is necessary for accountability tests that rank individual programs against a national average.
Criterion-Referenced - Tests generally fall into two categories: accountability tests and diagnostic tests. Diagnostic tests usually use criterion referencing; that is, they compare specific abilities to detailed measures or standards. For instance, reading specialists may state that by early second grade, students should have mastered certain phonological rules. If a student has not mastered those particular "criteria," he or she is considered below second-grade level in that skill. How criteria are defined can vary according to the experts who define them. For diagnostic purposes, however, what matters more is that the same measurement is used to plot progress. A child grows in height over the years; whether one measures the child in inches or centimeters does not matter. What is important is that the measurer uses the same system, so that growth or lack of growth can be recognized when measurements are compared.
Research-Based - Something is research-based if it was developed by recognized experts in a particular field. If those experts have statistical data or studies to support their findings, the claim is stronger. Ties to universities or other research organizations often help verify claims of being "research-based." Companies with no ties to public research institutions frequently lack third-party verification of their claims and thus carry less weight.