
A Comparison of Two Simple, Low-Cost Ways for Local, Pro-Poor Organizations to Measure the Poverty of Their Participants

Social Indicators Research

Abstract

The poverty scorecard and the Poverty Assessment Tool (PAT) are two simple, low-cost ways for local, pro-poor organizations to measure the expenditure-based poverty of their participants and thus to report on (and manage) their social performance. How do the two tools differ? For estimating a group’s poverty rate, both are unbiased, and the scorecard has smaller standard errors. For targeting individual households, the PAT correctly classifies about one more household per 100. The scorecard has had greater uptake due to its edge in availability, recency, and transparency.




Notes

  1. Poverty scorecards (“scorecards” for short) are also called “simple poverty scorecards”, “Progress out of Poverty Indexes®”, or “PPIs®” (trademarks registered by Grameen Foundation). All the names refer to the same approach, and most scorecards are also branded as PPIs®. Like PATs (povertytools.org), scorecards are available at no cost (progressoutofpoverty.org or microfinance.com). Copyright in a given scorecard is held by its sponsor and by Microfinance Risk Management, L.L.C.

  2. The world’s first scoring tools were likewise simple and transparent. “Because the first Fair, Isaac credit-scoring systems were to be deployed in small towns in rural America at the point of sale, they had to be simple enough to be understood by people with no knowledge of statistics and no access to calculators. [This drove] the choice of statistical method as well as the card format… Scoring was to be done manually by retail clerks, [so] addition… was possible but multiplication [was not]” (Poon 2007, p. 289).

  3. FHI360 (2013, p. 9); povertytools.org/povertypres/USAID_PATs/player.html (slide 24), retrieved 26 February 2014.

  4. Poverty outreach refers both to breadth (the number of poor clients served) and to depth (how poor those clients are) (Navajas et al. 2000).

  5. Some pro-poor organizations may be frightened of the additional pressure that comes with social accountability, but this is their problem (Dunford 2002a, b).

  6. Women’s World Banking (2013) offers five indicators for outreach to women.

  7. Lobbying was led by the Microcredit Summit’s Sam Daley-Harris with support from Muhammad Yunus, who later won the Nobel Peace Prize.

  8. That is, the author of this paper.

  9. The “flat maximum” phenomenon in the scoring literature also teaches that simple tools can be about as accurate as complex ones (Hand 2006; Baesens et al. 2003; Lovie and Lovie 1986; Kolesar and Showers 1985; Stillwell et al. 1983; Dawes 1979; Wainer 1976). Knowledge of the academic scoring literature and of the for-profit scoring industry allowed the scorecard developer to skip straight to the industry-standard Logit regression without wasting resources testing other approaches.

  10. CASHPOR itself uses both its housing index and the poverty scorecard.

  11. go.worldbank.org/T6LCN5A340, retrieved 24 Sept. 2014. DHS data comes with asset-index scores (dhsprogram.com/topics/wealth-index/, retrieved 24 Sept. 2014).

  12. A CGAP-supported asset-index approach for poverty assessment (Zeller et al. 2006; Henry et al. 2003) has mostly been abandoned in favor of the scorecard and PAT.

  13. Most countries’ proxy-means tests collect many more indicators than do the scorecard or PAT, although this adds little to targeting accuracy.

  14. A few papers compare non-accuracy aspects of poverty-assessment tools (Boucher et al. 2010; Zeller 2004; Simanowitz et al. 2000; Hatch and Frederick 1998).

  15. In this paper, expenditure is shorthand for “expenditure or income”, as some less-poor countries define and measure poverty in terms of income.

  16. A household is said to be poor if its expenditure—considering the number of its members and perhaps their age and sex—is below a poverty line such as a country’s official national line or the World Bank’s international benchmark of $1.25/day at 2005 purchasing-power parity. While this expenditure-based definition is not the only (nor necessarily the best) definition of poverty, it is what people and governments usually use, and it sums up poverty in a single, understandable number.

  17. As of September 2014, scorecards are still being made and updated. No PATs have been made or updated since 2011.

  18. Figure 1 is copied from povertytools.org/countries/Peru/USAID_PAT_PERU_05-2013.xls, retrieved 3 January 2014. Figure 2 is copied from Schreiner (2012b).

  19. The PAT’s indicators and points are documented, and FHI360 (2013) is a high-level guide to linking responses, points, scores, and poverty estimates. But the process is not made clear by the PAT’s paper instrument, and few users—especially front-line workers—will dig up the documentation or understand how the parts fit together.

  20. If the goal is to improve internal management, then cheating is self-destructive. Still, perverse incentives crop up with both the scorecard and the PAT whenever the entity that measures poverty is also rewarded for finding more poverty. And if a tool is used for targeting, then respondents also have incentives to try to look poorer than they are. Niehaus et al. (2013) and Drèze and Khera (2010) suggest that one way to deter corruption is to use simple targeting tools.

  21. For a given indicator, the point value for the most-likely-poor response is always zero.

  22. On average, the scorecard supports eight poverty lines, the PAT three (Table 1).

  23. If the highest possible score is not 100 due to rounding, then make it 100 by nudging the points for the rarest “least-poor” response up or down by one or two. This hardly changes ranks and spares users from puzzling about something inconsequential.

  24. Also, potential scorecard users usually review and field-test 15–20 finalist indicators.

  25. See #12 on STAT-L FAQ, www-personal.umich.edu/~dronis/statfaq.htm, retrieved 5 January 2014. According to Ira Bernstein, “Stepwise is no substitute for understanding the statistics, the data, and the domain. In general, because overfitting is a real issue, using theory and diagnostics to choose [indicators] that are somehow ‘non-optimal’ on the current data can nonetheless produce models that generalize better (and… are easier to explain to lay people).”

  26. This is the “concordance index” or the “area under the curve” (AUC) that plots the share of all poor households who have scores below a score percentile (vertical axis) against the score percentile of all households (horizontal axis). This is like a Lorenz curve, with “share of all poor households” replacing “share of total income” and “score percentile” replacing “income percentile”. In this sense, AUC is like a Gini coefficient.
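As an illustration (not part of the paper), the concordance index described in note 26 can be computed directly as the share of (poor, non-poor) pairs in which the poor household has the lower score, counting ties as half. A minimal sketch with made-up scores and poverty flags:

```python
def concordance(scores, is_poor):
    """Share of (poor, non-poor) pairs in which the poor household
    has the lower score; ties count half (the 'c' statistic / AUC)."""
    poor = [s for s, p in zip(scores, is_poor) if p]
    rich = [s for s, p in zip(scores, is_poor) if not p]
    pairs = len(poor) * len(rich)
    wins = sum((sp < sr) + 0.5 * (sp == sr) for sp in poor for sr in rich)
    return wins / pairs

# Toy data: lower scores should indicate greater poverty.
scores = [10, 20, 30, 40, 50, 60]
poor = [1, 1, 0, 1, 0, 0]
print(concordance(scores, poor))  # 8 of 9 pairs concordant
```

A value of 0.5 means the scores rank households no better than chance; 1.0 means every poor household scores below every non-poor household.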

  27. The other 19 PATs use all the survey data for construction.

  28. See IRIS Center (2005) or—for Peru—Zeller et al. (2005).

  29. q is between 0 and 1. For example, 0.5 is the median, and 0.25 is the first quartile.

  30. On $1.25/day, see Ravallion et al. (2009). The median line divides people (not households) below a country’s national poverty line into two equal-sized groups (U.S. Congress 2004). The person-level poverty rate for the median line is half of the person-level rate for the national line. The PAT incorrectly derives the median line based on households instead of people (see the appendix in Schreiner 2014).
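To make the person-versus-household distinction in note 30 concrete, here is a hypothetical sketch (the function name and data are invented, not from the paper) of the person-level derivation: expand each poor household by its number of members, then take the median of per-capita expenditure across those people, so that half of poor people fall below the resulting line.

```python
def median_line(expenditure_pc, hh_size, national_line):
    """Person-weighted median of per-capita expenditure among people
    in households below the national line."""
    # Expand to the person level: each poor household contributes
    # hh_size copies of its per-capita expenditure.
    people = []
    for e, n in zip(expenditure_pc, hh_size):
        if e < national_line:
            people.extend([e] * n)
    people.sort()
    m = len(people)
    return people[m // 2] if m % 2 else (people[m // 2 - 1] + people[m // 2]) / 2

# Five households; three are below a national line of 1.25.
print(median_line([0.5, 0.9, 1.1, 1.5, 2.0], [6, 4, 5, 3, 2], 1.25))
```

Taking the median across poor households while ignoring household size (the household-based derivation that note 30 criticizes) would generally give a different line, because poorer households tend to be larger.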

  31. To leave the scorecard’s indicators and points untouched, the changes apply only to a scorecard’s validation sample. The changes tilt comparisons in favor of the PAT because the PAT’s construction—unlike the scorecard’s—is tuned to this line.

  32. Given N samples indexed by i, estimates \(e_i\), and true values \(v_i\), bias \(= \frac{1}{N}\sum_{i = 1}^{N} (e_{i} - v_{i})\).
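The definition in note 32 can be sketched numerically; a minimal illustration with made-up poverty-rate estimates and true values (in percentage points):

```python
def bias(estimates, true_values):
    """Average of (estimate - true value) across N samples."""
    n = len(estimates)
    return sum(e - v for e, v in zip(estimates, true_values)) / n

# Four hypothetical samples in which the tool tends to underestimate.
e = [21.0, 23.5, 22.0, 24.0]
v = [22.0, 24.0, 23.0, 25.0]
print(bias(e, v))  # negative: estimates run below the true values
```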

  33. Sampling variation is what makes a given single sample—even if representative—an imperfect mirror of its population. Due to “luck of the draw”, some sub-groups in the sample are under- or over-represented, even though the differences would average out in repeated samples. Like an estimator’s bias, a sample’s representativeness is defined in terms of averages in repeated samples. In a single sample, an unbiased estimator can miss its mark, and a representative sample can—and generally does—fail to represent its population.

  34. A tool may also be called overfit if it loses accuracy when it is applied to non-nationally representative samples (the second assumption fails) or when time changes the relationships between indicators and poverty (the third assumption fails).

  35. For both tools, the average absolute bias is about 1.1 percentage points.

  36. For example, if known bias is −1.1 percentage points and if the average poverty likelihood is 22.2 %, then an unbiased estimate is 22.2 − (−1.1) = 23.3 %. Bias is negative because the original estimate tends to be too low, so the adjustment increases the estimate. If bias is positive, then the adjustment decreases the estimate.
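The adjustment arithmetic in note 36 amounts to subtracting the known bias from the raw estimate; a one-line sketch:

```python
def adjust(estimate, known_bias):
    """De-bias a raw estimate by subtracting the known bias
    (a negative bias therefore raises the estimate)."""
    return estimate - known_bias

print(adjust(22.2, -1.1))  # 23.3, as in the example in note 36
```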

  37. The scorecard’s average absolute bias is 1.1 percentage points, and the PAT’s is 0.7.

  38. Here, statistically significant means that the absolute value of the bias of at least one of the tools lies outside the 90 % confidence interval of the absolute value of the bias of the other tool.

  39. In the context of poverty mapping, Tarozzi (2008) and Tarozzi and Deaton (2007) argue that sub-group differences can lead to large biases. Their point is parried by Demombynes et al. (2008) and Elbers et al. (2008).

  40. When setting a targeting cut-off, an organization will generally consider all four possible targeting outcomes, weighting each according to its net benefit. In this paper, targeting accuracy is compared across the scorecard and PAT using the hit rate while holding constant both the share of all households who are targeted and the underlying poverty rate.
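As a hypothetical illustration of the comparison described in note 40 (the function and data are invented, not the paper's code): target the lowest-scoring share of households and report the hit rate, i.e., the share of all households classified correctly as poor-and-targeted or non-poor-and-not-targeted.

```python
def hit_rate(scores, is_poor, share_targeted):
    """Target the lowest-scoring `share_targeted` of households and
    return the share of all households classified correctly."""
    n = len(scores)
    k = round(n * share_targeted)  # number of households targeted
    order = sorted(range(n), key=lambda i: scores[i])
    targeted = set(order[:k])
    correct = sum((i in targeted) == bool(is_poor[i]) for i in range(n))
    return correct / n

# Ten households, 30% targeted: two true positives, six true negatives.
scores = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
poor = [1, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(hit_rate(scores, poor, 0.3))  # 0.8
```

Holding the targeted share and the poverty rate fixed, as in note 40, makes the hit rate comparable across the two tools.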

  41. The scorecard has 20 possible cut-offs, so targeting the same share of households as the PAT requires interpolating between cut-offs.

  42. The developers of another poverty-measurement tool (poverty maps) also say that its first-stage models are too inaccurate to target individual households (Elbers et al. 2003; Demombynes et al. 2004), although Elbers et al. (2007) seems to back off this claim a bit.

  43. World Bank (2012, Indonesia); Fernandez (2012, Philippines); Camacho and Conover (2011, Colombia); Sharif (2009, Bangladesh); World Bank (2009, Pakistan); Mostafa and da Silva (2007, Brazil); and Coady (2006, Mexico).

  44. For take-up, support is important. Comparing the support available for the two tools is beyond the scope of this paper, as is comparing their overall financial costs.

  45. The annual reports do not mention it directly, but it appears that more than 100 partners should have reported PAT results in each year.


Acknowledgments

The Ford Foundation funded this work but is not responsible for the content. Thanks go to Frank Ballard, Dean Caire, Frank DeGiovanni, Sean Kline, Mary Jo Kochendorfer, Anthony Leegwater, Margaret Richards, Jeff Toohig, and Matt Walsh. The PAT was developed for USAID at the University of Maryland by the now-defunct IRIS Center. The author developed the poverty scorecard with support from Grameen Foundation (GF) and the CGAP/Ford Social Indicators Project. The poverty scorecard is the same as what GF calls the Progress Out of Poverty Index (PPI®), a performance-management tool that GF promotes to help organizations achieve their social objectives more effectively.


Cite this article

Schreiner, M. A Comparison of Two Simple, Low-Cost Ways for Local, Pro-Poor Organizations to Measure the Poverty of Their Participants. Soc Indic Res 124, 537–569 (2015). https://doi.org/10.1007/s11205-014-0789-1
