Introduction

The Sustainable Development Goals (SDGs) comprise 169 specific, time-bound, and measurable targets for 2030 to leave no one behind. The SDGs carry forward and extend global efforts to achieve the Millennium Development Goals (MDGs) for attaining socioeconomic development, with disaggregation by different dimensions, such as age, sex, geographic area, and income. Collecting and compiling these indicators translates to enormous work for national statistical systems (NSSs) across different nations. Moreover, financial resources from bilateral grants and multi-donor trust funds for supporting statistical programs are limited and sparse (PARIS21, 2017). National governments may also lack the budget to finance statistical development programs for various reasons. In view of scarce resources, many NSSs resort to alternative methods to meet the growing demand for SDG data, along with other emerging data requirements for development planning.

In countries in Asia and the Pacific, geographic disaggregation is available for a number of SDG indicators. However, disaggregation by sex is limited for some SDG indicators, and disaggregation for other marginalized groups, such as persons with disabilities and indigenous peoples, is hardly available at all. These findings come from a survey of national statistical offices (NSOs) conducted in 2017 by the Asian Development Bank (ADB) and the United Nations Economic and Social Commission for Asia and the Pacific (UNESCAP). The survey further reveals that NSOs recognize that a possible solution to addressing the data granularity of the SDGs is the use of innovative data sources and techniques, particularly big data, to augment conventional data collection vehicles and methodologies.
Findings from the 2017 ADB/UNESCAP survey also confirm that more than half of responding NSOs from ADB member countries are using small area estimation techniques to generate more granular statistics, particularly on poverty and, to some extent, on population. Conventional small area estimation methods, particularly those used in poverty mapping, combine data from household surveys with census data. Statistical models are estimated by regressing the main characteristic of interest (e.g., income or consumption for poverty mapping), which is available from a survey, on explanatory variables that are available from both the survey and the census. After model estimation, out-of-sample prediction is undertaken using the values from the census data to produce imputed values for each record in the census. The resulting imputed values facilitate data granularity. Complexities arise when the survey and census data have reference periods that are distant from each other. Furthermore, timeliness also becomes an issue when the required surveys and censuses are not conducted frequently.

A knowledge initiative called Data for Development is exploring the integration of traditional and innovative data sources and the application of artificial intelligence to enhance the granularity of poverty statistics. The ongoing initiative is managed by the Asian Development Bank's Statistics and Data Innovation Unit, in collaboration with World Data Lab, the Philippine Statistics Authority, and the National Statistics Office of Thailand.

An Alternative Method Using Earth Observation Data

Earth observations from satellite platforms are a prime example of a data source that NSOs do not typically use to compile official poverty statistics.
Other types of innovative data sources can also facilitate enhanced compilation of poverty statistics; however, satellite imagery's main advantage over them is that it is easier to access and less prone to geographic selection bias, as it presumably covers even remote areas. To extract meaningful information from satellite imagery, we can capitalize on three fields of artificial intelligence: machine learning, deep learning, and computer vision. Machine learning describes algorithms designed to learn automatically from data and make responsive decisions, rather than follow preprogrammed rules. Deep learning refers to machine-learning algorithms built from layers of processing loosely inspired by how the human brain makes decisions. Computer vision is a branch of machine learning concerned with how computers develop a high-level understanding of patterns depicted in digital images.

The images in the figure below illustrate what a machine-learning algorithm can do in a computer vision task. In this instance, the input is a digital scan of a numeric character, which may have been written down by a person. Humans can easily recognize the image as the number "7." A computer, on the other hand, sees the image simply as an area comprising different abstract patterns or "features." To make the image meaningful to a computer, we need to train the computer to spot specific features and assign them to a particular category. In the second image, the machine-learning algorithm filters horizontal edges, while in the third, it filters vertical edges. These simple geometric filters constitute the initial steps, or layers, of a deep-learning algorithm. Progressively, as the algorithm's learning process deepens, it can eventually filter more complicated features in an image.
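The horizontal- and vertical-edge filtering described above can be sketched in a few lines of code. This is a minimal illustration with a toy image, not the study's actual pipeline; the image values and filter weights are made up for demonstration.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode sliding-window filter, the basic operation of a ConvNet layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny 6x6 "image" containing one horizontal and one vertical bright stroke
img = np.zeros((6, 6))
img[1, 1:5] = 1.0   # horizontal stroke
img[1:5, 4] = 1.0   # vertical stroke

# Simple geometric filters of the kind early ConvNet layers learn
horizontal = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]])
vertical = np.array([[1, 0, -1], [1, 0, -1], [1, 0, -1]])

h_response = convolve2d(img, horizontal)  # strong where horizontal edges occur
v_response = convolve2d(img, vertical)    # strong where vertical edges occur
print(h_response.shape, v_response.shape)
```

Each filter produces a map of responses that is large wherever the image contains the pattern the filter encodes; deeper layers combine such maps into progressively more complex features.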
Rather than edges or simple shapes, the more advanced layers of the algorithm can filter increasingly sophisticated patterns until the algorithm is able to classify images into their appropriate categories.

Figure 1: Illustration of a Computer Vision Task
Source: Graphics generated by study team.

To successfully recognize specific features, identify what is featured in an image, and hence classify it, a deep-learning algorithm needs large volumes of "labelled" images to train on. A labelled image is one whose classification is already known. In the context of poverty estimation, labelled images at granular levels are limited: most poverty statistics compiled by NSOs are available only at national, regional, or provincial levels, which is insufficient to train an algorithm to predict poverty successfully. To address this issue, Stanford researchers proposed a transfer learning approach in which, instead of training an algorithm to predict poverty outright from daytime satellite imagery, an algorithm is first trained to predict the intensity of night lights. Using night lights as a proxy for economic development is arguably valid if one assumes that places that are brighter at night are generally more economically developed than places that are less well lit. The advantage of training an algorithm to predict the intensity of night lights is that the source data, particularly satellite imagery, are readily accessible and can cost-effectively provide the large volumes of labelled images needed for training. The result is significantly more granular data than are available through conventional poverty estimates. Without being explicitly instructed what to look for, a deep-learning algorithm can learn to pick out many features that are easily recognizable to the human eye, such as roads, bridges, buildings, cars, or agricultural land, that are correlated with the intensity of night lights.
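The transfer learning idea can be sketched as follows: train a network on the abundant proxy task (predicting night light intensity), then reuse its learned internal representation as a feature extractor for the scarce target task. This sketch uses synthetic stand-in data and a small scikit-learn network instead of a ConvNet on real imagery; all sizes and names are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)

# Stand-in for daytime-image features; the real pipeline feeds imagery to a ConvNet
X = rng.normal(size=(600, 20))

# Proxy labels: night light intensity bucketed into low / medium / high (0, 1, 2)
latent = X[:, :5].sum(axis=1)
night_lights = np.digitize(latent, [-1.0, 1.0])

# Step 1: train on the abundant proxy task of predicting night lights
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
net.fit(X, night_lights)

# Step 2: transfer -- reuse the trained hidden layer as a learned feature extractor
def hidden_features(model, X):
    # ReLU activation of the first (and only) hidden layer
    return np.maximum(0, X @ model.coefs_[0] + model.intercepts_[0])

features = hidden_features(net, X)
print(features.shape)  # one 16-dimensional learned representation per image
```

The 16-dimensional representation, learned without any poverty labels, is what gets carried over to the poverty-prediction stage.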
Once the algorithm has learned to associate specific features of an image with different levels of night light intensity, that knowledge can be transferred to predict poverty. Some studies use information about night lights to predict poverty directly. Others use specific image features, such as roofing material. The method described here is more comprehensive, as it uses more features of daytime imagery to map the spatial distribution of poverty. Results from our analytical exercise suggest that this yields better predictive performance.

Analysis

To examine the feasibility of applying this method in the context of the Philippines and Thailand, we used publicly available satellite images. We think such an approach may be attractive to NSOs that are just beginning to explore these innovative data sources and methods, which greatly increases the applicability of the approach to other areas in which NSOs work. The results are generally encouraging. In the first step, we find that our adopted computer vision technique, a Convolutional Neural Network (ConvNet), is able to infer the intensity of night lights satisfactorily from features of daytime imagery (see table below).

Table 1: Prediction Accuracy of Convolutional Neural Network

            Thailand              Philippines
            2013       2015       2012       2015
Accuracy    0.85785    0.85219    0.94150    0.93500

However, as explained earlier, predicting night light intensity is just an intermediate step to address the lack of granular poverty data with which to train a computer vision algorithm. As a second step, we extracted the features within the satellite images that were used in predicting night light intensity. This was fairly straightforward, as the ConvNet represents these features numerically as complex mathematical functions. We aggregated these data by taking the average values of these functions at the same level at which our poverty data are available.
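The aggregation step just described, together with the second-stage ridge regression that follows, can be sketched as below. The data here are synthetic and the tile counts, feature dimensions, and province labels are all illustrative, not those of the actual study.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)

# Per-tile ConvNet features (synthetic stand-ins) and each tile's province
n_tiles, n_features, n_provinces = 2000, 32, 40
tile_features = rng.normal(size=(n_tiles, n_features))
province = np.arange(n_tiles) % n_provinces  # 50 tiles per province

# Aggregate: average feature values at the level where poverty data exist
agg = np.zeros((n_provinces, n_features))
for p in range(n_provinces):
    agg[p] = tile_features[province == p].mean(axis=0)

# Province-level poverty rates (synthetic, for illustration only)
poverty_rate = np.clip(
    0.2 + agg @ rng.normal(0, 0.05, n_features) + rng.normal(0, 0.02, n_provinces),
    0, 1,
)

# Second stage: regress poverty on the aggregated image features with a ridge penalty,
# which stabilizes the fit when features are many relative to the number of areas
ridge = Ridge(alpha=1.0).fit(agg, poverty_rate)
predicted = ridge.predict(agg)
print(predicted.shape)
```

The ridge penalty matters because the number of areas with poverty data is typically small relative to the number of extracted image features, so an unpenalized regression would overfit.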
Then, we regressed the poverty data on the aggregated image data using ridge regression. The resulting predictions were generally aligned with the government-published poverty estimates after calibration.

Recommendations

There are aspects of our adopted method that could be further improved. For instance, we note that the resolution of the input imagery affects the quality of the outputs: higher-resolution imagery is associated with better predictive performance. Hence, scaling up from exploratory studies to more rigorous poverty mapping initiatives can potentially benefit from commercially available high-resolution imagery, as well as from more sophisticated computing tools. Another key consideration is the granularity of the input data used in training the algorithm. In this study, we showed that for periods when small area poverty estimates are available, the predictive performance is satisfactory. However, there are indications that the algorithm's predictive performance is lower if more up-to-date, albeit less granular, poverty data from household income and expenditure surveys alone are used, probably because of the relatively small sample sizes of these surveys. This is an important caveat for future research, especially when only household survey data are available for training an algorithm.

Resources

Asian Development Bank (ADB). 2020. Mapping Poverty Through Data Integration and Artificial Intelligence—Special Supplement to Key Indicators for Asia and the Pacific. Manila.
ADB. 2020. Introduction to Small Area Estimation Techniques—A Practical Guide for National Statistics Offices. Manila.
I. Goodfellow, Y. Bengio, and A. Courville. 2016. Deep Learning. MIT Press.
N. Jean et al. 2016. Combining Satellite Imagery and Machine Learning to Predict Poverty. Science, 353(6301): 790–794.
Partnership in Statistics for Development in the 21st Century (PARIS21). 2017. Partner Report on Support to Statistics: PRESS 2017. Paris.
United Nations. 2020. Global Indicator Framework for the Sustainable Development Goals and Targets of the 2030 Agenda for Sustainable Development.

Ask the Experts

Arturo Martinez, Jr.
Statistician, Economic Research and Development Impact Department, Asian Development Bank

Art Martinez works on Sustainable Development Goals indicator compilation, particularly poverty statistics and big data analytics. Prior to joining ADB, he was a research fellow at the University of Queensland, where he also earned his doctorate in social statistics.

Asian Development Bank (ADB)

The Asian Development Bank is committed to achieving a prosperous, inclusive, resilient, and sustainable Asia and the Pacific, while sustaining its efforts to eradicate extreme poverty. Established in 1966, it is owned by 68 members, 49 of which are from the region. Its main instruments for helping its developing member countries are policy dialogue, loans, equity investments, guarantees, grants, and technical assistance.