
ESSER is fueling one-size-fits-all strategies. Let’s use data to deliver more targeted efforts.

Marguerite Roza and Ellie Roza
Published June 30, 2022 on The Thomas B. Fordham Institute

States and districts face no shortage of seemingly overwhelming problems, especially the devastating learning loss among vulnerable students from extended pandemic school closures. But leaders do have money: States and districts received $123 billion in federal emergency (ARP ESSER) relief. It's a big number, but it's not enough to do everything, which makes it incumbent on leaders to figure out how to get the most from the finite dollars they have.

In the first year, many leaders used relief funds for uniform efforts that essentially delivered the same treatment to every student, classroom, and school on their watch. ESSER funds were used to lengthen the school day, extend the year, reduce class sizes, distribute laptops, fund teacher professional development days, and grant across-the-board pay bonuses. These approaches—while useful for some challenges—are inherently expensive because they treat everyone as having the same needs. And in some cases, they’re not concentrated enough to solve the problem.

As we approach ESSER’s halfway mark, leaders face choices with their finite remaining dollars. Some will double down with heavy investments in more one-size-fits-all approaches. But others will find that a more effective approach is to target investments precisely to the specific students, classrooms, or schools where investments are needed most.

There’s a catch, however, to making targeted investments: It’s only possible when leaders have data. They need data that tell them which students need what and which parts of the system could be working better. And they need to look closely to determine exactly where to target limited dollars.

Take one of the most pressing challenges: reading. For a host of reasons, the pandemic has left some students further behind than others. When districts attempt to apply the same remedy to all students (for example, smaller class sizes or longer school days), the investment is effectively diluted because some of those students don’t need as much help as others. Rather than offer the same services to all, schools can deploy more customized recovery services if they’re armed with data on which students are reading at what level. A narrowed focus could mean coordinating with parents of those struggling the most to ensure that targeted tutoring or summer school investments are designed for and ultimately reach their children.

Sometimes it is parts of the system that need fixing. After finding that some classrooms were using outdated and less effective strategies to teach reading, many states are now working hard to retool reading instruction and confronting the fact that changing teachers' practice is tough. Sweeping statewide action makes sense for setting uniform policy and purchasing curricula. But we're seeing one-size-fits-all approaches in other areas, too, like professional development, that could waste precious time and money.

Layering in a more targeted approach may make sense. For example, why pay to put the entire teacher corps through intensive training if some teachers are already up to speed or well on the road to getting there? Why not use data that clearly point to where in the state students aren’t getting what they need, and then direct the dollars to deliver more intensive training to teachers in the schools where the problem is most acute?

Crunching the data is important because it shows where the problems are and helps leaders prioritize interventions. Let’s return to the reading challenge. As a case analysis, we pulled existing data on Washington State’s pre-pandemic (2018–19 school year) reading scores and identified elementary schools that substantially underperform their peers on reading, even after accounting for their student characteristics.

We then surfaced those schools where at least 10 percent more students failed to pass their grade-level state reading test than statistical modeling predicted. For a school of 600 students, that means at least sixty more students failed than would be predicted in the models.
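The screening logic described above can be sketched in a few lines: model each school's expected pass rate from its student characteristics, then flag schools whose actual rate falls at least 10 points below the prediction. What follows is a hypothetical illustration with made-up numbers, not the authors' actual model, which accounted for richer student characteristics; here a single poverty covariate stands in.

```python
import numpy as np

# Hypothetical data: one row per school (illustrative only).
# pct_poverty = share of low-income students; pass_rate = share passing the state reading test.
schools = ["A", "B", "C", "D", "E"]
pct_poverty = np.array([0.10, 0.55, 0.80, 0.30, 0.60])
pass_rate   = np.array([0.78, 0.60, 0.38, 0.52, 0.55])

# Fit a simple linear model: expected pass rate as a function of poverty.
slope, intercept = np.polyfit(pct_poverty, pass_rate, 1)
predicted = intercept + slope * pct_poverty

# Flag schools whose actual pass rate is at least 10 points below the
# model's prediction, i.e., at least 10 percent more students failed
# than the model expected.
flagged = [s for s, actual, pred in zip(schools, pass_rate, predicted)
           if pred - actual >= 0.10]
print(flagged)  # In this made-up data, school D is flagged.
```

Note that the flagged school in this toy example is not the highest-poverty one: it underperforms relative to what its demographics predict, which is exactly the point of modeling expectations before comparing.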

We know that scores tend to be higher for affluent schools than for those serving many high-poverty students. But this statistical modeling accounts for that. And it’s clear that the underperforming Washington schools that we identified aren’t limited to those that are high poverty or that predominantly serve students of color.

Scenic Bellingham, a coastal college town, for instance, has fourteen elementary schools, four of which popped up in the data as way-underperforming on reading. Two are predominantly low-income; the other two are decidedly not. Seattle has eight underperforming elementary schools of varying poverty levels. Tacoma has four, and Bellevue has two.

The data clearly show where there’s a problem. Could there be explanations for these reading outcomes other than outdated curricula and poor instruction? Of course. (And given that these are two-year-old data, some schools may have already shifted course on reading.) But these data do show leaders where to start digging deeper. If it turns out the obstacle isn’t a reading curriculum, teacher training, or implementation issue, then leaders can figure out what else is going on and work to fix it.

When it comes to early reading, the cost of inaction is real for both budgets and student learning. Left unattended, reading struggles can trigger referrals to costly interventions, where learning specialists go back and try to teach students to read after the fact, which—sadly—we know is a very heavy lift. When more students are identified as needing special education services, the expensive legal obligations land on states and districts.

Using data to deliver targeted interventions should extend far beyond reading, however, and leaders urgently need to get measuring, analyzing, and fixing before the relief dollars run out. The pandemic left schools with mammoth challenges. Using data to zero in on problem hotspots makes tackling them much more manageable. Every state and district could, and should, run these types of analyses to target relief dollars and start fixing the problems the data surface.

Too often, leaders default to costly one-size-fits-all approaches that spread resources a mile wide and an inch thick. One explanation for the heavy reliance on uniform delivery is that education systems aren’t driven by data. Some leaders eschew measurement, perhaps worried that it brings judgment or punishing accountability.

Yet other sectors systematically tap data to identify and solve problems. Education needs to follow suit. In this case, the goal of using data is to prioritize and target interventions, given that there’s no bandwidth or money to fix everything, everywhere, all at once.

And we need more urgency around deploying data right now, to get kids on track while federal relief money is still on the table. Come 2024, the financial windfall that district and state leaders have at their fingertips runs out—and by then, the usual sources of revenue may be hit hard by changes in the national economy.

States can push their districts to better use data to drive spending decisions. Doing so would make the most of relief funds while they exist and, where students get up to speed, would ease future cost burdens. Now more than ever, we need leaders to use data to point the way.

This commentary was originally published by the Thomas B. Fordham Institute.


