Glossary

Evidence-Based

According to the federal definition:

an activity, strategy, or intervention that
  • (i) demonstrates a statistically significant effect on improving student outcomes or other relevant outcomes based on:
    • (I) strong evidence from at least 1 well-designed and well-implemented experimental study;
    • (II) moderate evidence from at least 1 well-designed and well-implemented quasi-experimental study; or
    • (III) promising evidence from at least 1 well-designed and well-implemented correlational study with statistical controls for selection bias; or
  • (ii)
    • (I) demonstrates a rationale based on high-quality research findings or positive evaluation that such activity, strategy, or intervention is likely to improve student outcomes or other relevant outcomes; and
    • (II) includes ongoing efforts to examine the effects of such activity, strategy, or intervention.

Integration Opportunity

Time allocated for integrating reading or writing skills that are often practiced in isolation. Integration opportunities include independent reading (which integrates word recognition, fluency, vocabulary, and comprehension) and shared reading with discussion. Shared reading without discussion, especially choral or echo reading, does not require integration, since students can follow along with the group without connecting their own word recognition to comprehension. Similarly, independent writing and interactive/shared writing that includes discussion can be integration opportunities, but guided writing may not offer the same opportunities for integration. Likewise, discussion after a read-aloud may address comprehension and model fluency without providing an opportunity to integrate the two.

Integration opportunities often require a close analysis of individual lessons that is difficult to accomplish using only sample materials.

To facilitate comparison, integration opportunities are calculated across a single week of instruction. Where possible, they are calculated for grade 3, which showcases opportunities for independence that may not be as obvious in the first units of grade 1.

Cultural Responsiveness

Cultural responsiveness is most often measured using the Culturally Responsive Curriculum Scorecard, developed and validated by the Education Justice Research and Organizing Collaborative (EJ-ROC) at New York University. The ELA scorecard can be accessed for free and used with groups of educators and community members to rate cultural responsiveness in a given context. The ratings incorporated here come from a report issued by the scorecard's creators in collaboration with a multi-stakeholder group representing multiple perspectives, which evaluated the curriculum options under consideration in New York City at the time.

CCSS-Alignment and Feasibility

CCSS-alignment and feasibility ratings are drawn from EdReports profiles.

EdReports prioritizes CCSS-alignment and will not continue rating programs that do not demonstrate alignment according to its criteria. Assessing grade-level complexity requires curricula to pre-select the texts (or example texts) used in each lesson. Curricula that leave significant room for individual teachers or students to select texts or topics cannot be rated and therefore receive low ratings or none at all.

Content focus

Content focus is reported using the ELA Knowledge Maps created by Johns Hopkins University’s Institute for Education Policy.

Classification Accuracy

Classification accuracy is reported using ratings from the National Center on Intensive Intervention’s academic screening tools chart.
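Classification accuracy for screening tools is typically summarized with statistics such as sensitivity (the proportion of truly at-risk students the screener flags) and specificity (the proportion of not-at-risk students it correctly passes). As an illustrative sketch only, with hypothetical data and function names (not the NCII's actual procedure):

```python
def classification_stats(flagged, at_risk):
    """Compute sensitivity, specificity, and overall accuracy.

    flagged: list of bools, True if the screener flagged the student.
    at_risk: list of bools, True if the student was truly at risk
             on the later criterion measure.
    """
    tp = sum(f and r for f, r in zip(flagged, at_risk))          # correctly flagged
    tn = sum(not f and not r for f, r in zip(flagged, at_risk))  # correctly passed
    fp = sum(f and not r for f, r in zip(flagged, at_risk))      # false alarms
    fn = sum(not f and r for f, r in zip(flagged, at_risk))      # missed students
    sensitivity = tp / (tp + fn)          # share of at-risk students caught
    specificity = tn / (tn + fp)          # share of not-at-risk students passed
    accuracy = (tp + tn) / len(flagged)   # overall share classified correctly
    return sensitivity, specificity, accuracy

# Hypothetical results for 10 students
flagged = [True, True, True, False, False, False, False, True, False, False]
at_risk = [True, True, False, False, False, False, True, True, False, False]
sens, spec, acc = classification_stats(flagged, at_risk)
```

A screener can have high overall accuracy while still missing many at-risk students, which is why sensitivity and specificity are reported separately.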

Products advertised on the same landing page

It is common for districts to be offered quotes that bundle multiple items together, including products from the same company for other grades, content areas, and uses (e.g., curriculum, assessment, professional development, data management, data storage). This variable indicates the product ads and suggestions found on the landing page for each program and assessment.

Alignment to other assessments

One key question when considering quality is not whether the tool is “good” but what it is “good” at measuring. Assessments often indicate validity by explaining how their scores align with, or predict, scores on other known assessments. Knowing what a test has been optimized to measure or predict should help you decide whether its scores are even of interest, regardless of how trustworthy they are. This is why “alignment to other assessments” is reported on this page whenever possible.
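Alignment of this kind is often reported as a correlation (a validity coefficient) between scores on the two assessments. As a minimal sketch with hypothetical scores (the actual coefficients come from each assessment's technical documentation):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two score lists: a common way to
    report how one assessment's scores align with another's."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical screener scores and later state-test scores
screener = [10, 12, 15, 18, 20, 22]
state_test = [300, 310, 330, 335, 350, 360]
r = pearson_r(screener, state_test)  # close to 1.0 when rankings agree
```

A high coefficient only means the two tests rank students similarly; it says nothing about whether either test measures what you care about.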