
Deconstructing "What Works" in Education Technology


Over the weekend, The New York Times published the second story in its series on "Grading the Digital School." The first story in the series questioned the massive expenditures schools make on education technology, pointing to stagnant test scores as an indication that these investments might not be worth it. Last weekend's story extends that skeptical eye to the outcomes from educational software, again questioning whether using these tools in the classroom makes much of a difference in terms of student achievement.

The story focuses on the claims made by Carnegie Learning that its math software provides "revolutionary results." But Carnegie Learning is hardly alone in the claims it makes about the effectiveness of its products. Adjectives like "revolutionary," "innovative," "feature-rich," "content-rich," "engaging," and "standards-based" pepper the landscape. And much like the well-known advertising claim that "4 out of 5 dentists surveyed recommend Trident gum to their patients who chew sugarless gum," educational software frequently touts claims about the satisfaction and success of the teachers, students, or administrators who use the products.

The New York Times article interrogates these claims, noting that "amid a classroom-based software boom estimated at $2.2 billion a year, debate continues to rage over the effectiveness of technology on learning and how best to measure it."

The article points to the Department of Education's What Works Clearinghouse (WWC) as a resource for school districts to use to help make better procurement decisions. As the name suggests, the What Works Clearinghouse is meant to be a "central and trusted source of scientific evidence about 'what works' in education."

The WWC covers a range of educational topics -- math, literacy, science, special education, and drop-out prevention, for example -- as well as multiple age groups and different populations, such as English-language learners. The Web site contains an interface that lets schools search for the "interventions" they want (that is, for the educational programs, practices, or policies) -- with options to search by specific topic area, age group, population, delivery method, and program type. It also lets users filter search results by the "effectiveness" of the programs -- in other words, positive effects, potentially positive effects, mixed effects, negative effects, or potentially negative effects.


The WWC does not conduct research itself but rather reviews the research of others. That's a key piece here, as the WWC wants to ensure that what "counts" as effective educational software is really supported by rigorous research. According to its procedures and standards (PDF), "the WWC only reports impacts for studies for which it has high or moderate confidence that the effect can be attributed solely to the intervention rather than to the many other factors that are at play in schools and in the lives of students." In other words, the WWC is supposed to offer a "Good Housekeeping Seal of Approval," of sorts, for educational research as well as for the educational software and practices it examines.

But there are problems with this approach.

Even the Times piece points out that many schools do not know about and thus do not use the research there. According to a study last year by the Government Accountability Office, just 42% of school districts even know about the Web site; and of those that do, they say they use it only to a "small or moderate extent" in decision-making. That may be in part because the Web site says nothing about the costs of these programs, so schools might not feel they have all the information they need to make a purchasing decision (the software with "the best results" might not be within everyone's budget). And the research on the site often isn't quite up-to-date (the "relevant time frame" for research covered by the WWC includes anything from the past 20 years).

Furthermore, the research covers only a limited number of software products. In part, of course, that's because of the site's stringent policies on "what counts" as research. But it's also a reflection of which products get scientific research conducted about them -- often only the software created by the major companies. With that limited scope, the site doesn't really help schools that want to use products from smaller companies or from app developers.

An article by University of California, Berkeley professor Alan Schoenfeld, a former "senior content adviser" for the WWC, points out some of the other problems with the organization (PDF). Schoenfeld notes some of the very real complexities and challenges of conducting educational research and evaluating curricula. He also suggests that, despite its claims of scientific rigor, the WWC is often mired in the politics of education policy and ideology.

That brings us full circle, in some ways, to the Times series on "Grading the Digital School." In both stories, it's standardized test scores that are used as the measurement for what's working and what's not working in education technology. That is, after all, what the Department of Education counts as well.

But is that really the only or the best measure for the success of any educational effort, whether it's digital or not? And if it's not, how then can educators and parents identify educational software that's worth using? How will we measure "what works" in ed-tech?

