By Luis Benveniste, Practice Manager, Global Engagement and Knowledge at World Bank, and Silvia Montoya, Director of the UNESCO Institute for Statistics
There has been an important shift in the global measurement of learning. The Inter-Agency Expert Group on Sustainable Development Goal Indicators (IAEG-SDGs) has decided to ‘upgrade’ SDG Indicator 4.1.1 on learning outcomes: the proportion of children and young people who achieve at least a minimum proficiency in reading and mathematics. Once a ‘Tier III’ indicator (an indicator that does not yet have established methodologies or standards), 4.1.1 has been upgraded to a ‘Tier II’ indicator for two points of measurement (end of primary and lower secondary), which means it meets methodological criteria although data are available for less than 50% of countries in each region.
This upgrading shows that, with a mix of innovation, pragmatism and consensus-building, we can really start to see if children are learning the basics at the end of primary and lower secondary education and help countries improve learning outcomes by providing the necessary data. At the same time, countries that are not currently producing the data have the green light to pursue donor funding for capacity building.
While more and more countries have been assessing learning in recent years, the results have been based on different methodological approaches and could not be compared internationally. The agreement of the IAEG-SDGs to upgrade Indicator 4.1.1 to Tier II was the result of technical analysis and a proposal from the UNESCO Institute for Statistics (UIS) and its technical partners. The aim is to establish a baseline for measurement from 2017 and, if possible, spur greater donor investment in this area.
How do we compare learning outcomes across countries?
We began with a dose of pragmatism. To compare learning outcomes across countries, you have to address three basic questions:
- Which skills can be compared?
- For each skill, what is the minimum proficiency level defined and agreed upon by countries?
- How often should the skills be assessed in order to have an impact on policymaking while recognising the financial costs involved in testing?
The good news is that these questions have already been addressed by the roughly one-half of the world’s countries that participate in regional and international learning assessments. So instead of starting from scratch, we have found a way to anchor regional and international learning assessments within a single database. The methodology builds on the work of countries and takes their investments in regional assessments to the next level – i.e. global monitoring. It builds on their national political capital, as countries have invested money, time and technical expertise in regional approaches to learning assessments for years. And over the years, they have resolved differences in language, curricula and cultural viewpoints to come up with meaningful measures that can be used across their regions.
Work is underway to build the database that will – at first – cover the share of pupils reaching minimum proficiency levels in reading and mathematics at the end of primary and lower secondary education. It will include the latest available data from major regional and international assessments, such as the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA), both of which have released new data over the past 10 days.
The data will be validated by countries and will provide us with the first baselines to start measuring learning internationally in 2017. We also intend to gather data on minimum proficiency levels in the early grades of primary school (Grades 2 and 3), which is still classified as a Tier III indicator, but this will require more discussion among countries. They must decide which foundational skills should be measured and how. The ongoing discussions “underscore the need to balance a sound theoretical approach with practical considerations, which will likely delay agreement on cross-nationally valid measures and benchmarks,” as stated in the 2016 edition of the Global Education Monitoring Report.
Baseline data for 2017
This database and the upgrading of Indicator 4.1.1 reflect more than a technical solution – they have major political implications. For the first time, we will have a baseline in 2017 on the extent to which children and young adolescents are actually acquiring the minimum levels of skills in reading and mathematics, based on a set of harmonised criteria established by the many countries participating in regional and international assessments.
This breakthrough also paves the way to a possible lead indicator that would go straight to the heart of the SDG 4 agenda: to ensure that all children are in school and learning. Many stakeholders, notably the Education Commission, are urging the development of a lead indicator that would reflect access to and quality of education. It would combine Indicator 4.1.1 (share of children at the end of primary with a minimum proficiency in reading and mathematics) with the completion rate for primary school-age children and/or the out-of-school rate for this age group – data which are already produced by the UIS.
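To make the idea of such a lead indicator concrete, here is a minimal sketch of one way the two measures could be combined. The multiplicative rule, the function name and all the numbers below are illustrative assumptions for this post, not an agreed SDG methodology – the actual combination is still under discussion.

```python
# Hypothetical illustration only: how a lead indicator might combine
# Indicator 4.1.1 (minimum proficiency among those assessed at the end
# of primary) with the primary completion rate. The multiplicative rule
# assumes proficiency is measured among completers.

def lead_indicator(completion_rate: float, min_proficiency_rate: float) -> float:
    """Estimated share of the primary school-age cohort that both
    completes primary education and reaches minimum proficiency."""
    return completion_rate * min_proficiency_rate

# Made-up example values: 90% of the cohort completes primary school,
# and 70% of those assessed reach minimum proficiency in reading.
share = lead_indicator(0.90, 0.70)
print(f"{share:.0%}")  # 63%
```

Under these assumptions, a country could report high access (90% completion) yet have only 63% of the cohort both in school and learning – exactly the gap a combined access-and-quality indicator is meant to expose.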
The IAEG is going to consult on a list of additional indicators, including either the out-of-school rate or the completion rate, in the global monitoring framework for SDG 4 to help ensure that no child is left behind.
In parallel, we continue to work with countries to link their national-level assessments to a set of universal learning scales. Again, this is about helping countries to make the best possible use of their own data and systems, rather than adding another layer of assessment. What countries need and want is a set of tools to fine-tune or align some of the technical components of their own assessment systems to the global metrics.
How are we helping countries to measure learning?
A network is already in place to move this forward. The Global Alliance to Monitor Learning (GAML), established by the UIS, provides methodological solutions to develop the new indicators on learning that we need to achieve SDG 4 and to set the standards for good practices on learning assessments. Member States and technical experts from around the world have come together to develop an innovative but pragmatic approach that recognises diversity while yielding internationally-comparable measures of learning.
At the same time, we want to help countries develop their own assessment systems or improve the quality of existing ones. So through GAML, the UIS and the Australian Council for Educational Research (ACER) are also developing a series of tools, such as a data quality assessment framework, that will help countries strengthen their systems while identifying statistical capacity needs for donors.
Another key partner is the World Bank, with its SABER Student Assessment, which is enhancing the global knowledge base on effective student assessment policies. Its comprehensive framework and related diagnostic tools help identify the key quality drivers that need to be addressed in order to strengthen the quality and utility of the information produced by assessment systems. Education policymakers and practitioners can also use the SABER framework and methodology to enable cross-country learning and foster informed dialogue and decision-making.
The upgrade of Indicator 4.1.1 reflects a global consensus on the way forward. Countries understand that metrics are not perfect and that there will always be areas that are not 100% comparable. But they also appreciate the promise of having a mechanism to inform, guide and monitor our commitment to improve learning for all children, regardless of their circumstances. The UIS and its many partners in education data collection remain dedicated to refining and improving measures of student learning, while supporting countries to build the capacity to implement them.