Our obsession with adopting education strategies from countries that fare better in rankings is unhelpful, say leading education statisticians. They suggest how policymakers could use data more wisely
In this week’s issue of Science, two leading American educational statisticians argue that, although international large-scale educational assessments (ILSAs) can provide invaluable insights, nations’ obsessions with their rankings alone are unhelpful.
In 2012, the Programme for International Student Assessment (PISA) reported that the seven highest-performing education systems in maths were all in East Asia – three of them in urban parts of China, with Shanghai at number one. The UK appeared just 26th on the list. In response, the UK government’s schools minister, Nick Gibb, launched a programme that introduced ‘mastery textbooks’ – modelled on the mathematical pedagogy of the Far East – into around half of English primary schools.
It is policy decisions like this, based on inferences from international rankings, that Henry Braun from Boston College in Massachusetts, US, and co-author Judith Singer of Harvard University, also in Massachusetts, find problematic.
The first issue, they argue, is that apparently similar samples from different countries may turn out to be quite different on closer inspection. Until recently, for example, China barred the children of rural migrants from enrolling in city schools, so these urban poor were excluded from the city schools’ statistics.
‘Interpreting rankings as signalling something about the relative efficiency of different educational systems is entirely problematic because there are so many other factors that contribute to a country’s standing,’ says Henry. ‘In East Asian countries that typically top the rankings, a large proportion of students – sometimes more than half – participate in after-school tutoring, so the exemplary overall results are really what the regular school system is doing plus the effect of these tutoring schools.’
Henry and Judith argue that ILSAs can be more valuable if analysed subtly and combined with other data. They take as an example comparisons of the 2012 PISA maths scores of the US, Canada and Singapore. ‘Singapore is a small city state and the United States is a gargantuan country with 51 different education systems, so trying to draw lessons from Singapore to the United States would be a difficult stretch,’ says Henry. ‘But Canada has a provincially organised education system just like the United States has a state-organised one, and the two countries share not just a continent but many cultural and educational influences. So the fact that Canada as a country does quite a bit better than the United States as a country … certainly suggests the need for further investigation.’
The duo make several suggestions, but the one they believe ‘most likely to yield improvements in education policy’ is to view ILSAs not as a source of direct lessons about education policy but as a source of hypotheses to be tested in randomised field trials. ‘There are any number of reports that look through the international assessment results and say, “Here are the leading countries; this is what they all have in common; this is what everybody ought to do”,’ concludes Henry, ‘and I think that’s a very superficial and ultimately unproductive approach to helping countries.’ A better way, Henry suggests, would be for them to ‘figure out how they stack up against other countries with similar educational and economic backgrounds, and then to figure out what they would adapt – rather than adopt – from other countries to try to do better.’
J D Singer and H I Braun, Science, 360, 38 (DOI: 10.1126/science.aar4952)