No. A range of tools is available for rating curriculum options, but not for rating assessment options, which are harder to compare directly (an apples-to-oranges problem). DIBELS is approved in 91% of states with published lists, six other assessments are approved roughly 50% of the time, and many others appear on five or fewer state lists.
Curricular options tend to meet EdReports criteria, with CCSS-alignment carrying the most weight, but fall short on most other measures of quality.
No. Odegard and colleagues have demonstrated that screening for dyslexia in particular does not identify more students with dyslexia, and may unnecessarily bias diagnoses toward white, middle-class students and away from others.
Similarly, Hill and colleagues at Harvard’s Center for Education Policy Research have demonstrated that simply gathering more screening and progress monitoring data has not improved student outcomes, despite significant investments of infrastructure, time, and energy:
“In the past two decades, researchers have tested 10 different data-study programs in hundreds of schools for impacts on student outcomes in math, English/language arts, and sometimes science. Of 23 student outcomes examined by these studies, only three were statistically significant. Of these three, two were positive, and one negative. In the other 20 cases, analyses suggest no beneficial impacts on students. Thus, on average, the practice seems not to improve student performance.”
Testing, and certainly over-testing, has consequences for time and attention that are not guaranteed to lead to positive outcomes.

Whatever the organization publishing a rating tool values is what counts as quality, and few rating tools focus on overlapping areas. No existing rating tools for curricula have been validated against actual student outcomes. In fact, well-known rating systems, like EdReports, lean heavily on indicators of quality that lack empirical evidence, like standards alignment. These tools are best understood as expressions of values, not indicators of quality.
The early research suggesting that most curricular materials were of low-quality was measuring the alignment of materials to Common Core State Standards (CCSS). Yet, CCSS-alignment itself has never demonstrated a positive effect on student achievement.
Existing rating systems concentrate on the information provided in the technical manual for each assessment to estimate how “convincing” indicators of things like feasibility, classification accuracy, reliability, and validity are. All available tools publish this data. However, because assessments often represent different designs and types (e.g., computer-adaptive or fixed-question; norm- or criterion-referenced), the processes used to demonstrate such indicators vary. Even within an assessment type or style, test developers have choices about how they demonstrate the validity and reliability of their tools. This makes direct comparisons between tools nearly impossible.
In addition, each subtest carries its own reliability and validity ratings, and some assessments include subtests that earn both very high and very low ratings.
One key question when considering quality is not whether the tool is “good” but what it is “good” at measuring. Assessments indicate validity by explaining how their scores align with or predict scores on other known assessments. Knowing what a test has been optimized to measure or predict should help you decide whether its scores are even of interest, regardless of how trustworthy they are. This is why “alignment to other assessments” is reported on this page whenever possible.
It is difficult to estimate the actual cost of a program or screener because quotes often depend on district size and bundling. Even products with relatively transparent pricing advertised online (e.g., $1 per child, or free) can be difficult to estimate because they can be bundled (for “free”) with another product, such as curriculum, assessment, intervention, or professional development, or carry invisible costs such as printing, storage, and data management in the case of free or open-access materials.