
7.1 Inequality, Inequity and Disparities in Agriculture and Nutrition

7.1.1 Motivation and Guiding Questions

So far, we have seen how each person’s choice among their options leads to the societal outcomes we observe, in ways that depend on private transactions in markets and collective actions through policies and programs. We saw how market failures and policy failures can be understood and potentially overcome, allowing each population to reach higher levels of wellbeing. Our analytical diagrams helped explain outcomes at any one point in time, for each person and the population, showing the causal mechanisms needed to make qualitative predictions about the total or average outcome per person in the group. This chapter begins the second half of this book, turning from qualitative models to empirical measurement: how do economic principles play out in practice? How well have individuals and populations succeeded in meeting their needs?

To describe observed patterns, we use scatterplots, bar charts and line graphs that show differences between groups and also changes over time. The economic principles introduced in Chapters 1–6 were shown in stylized models, and now we shift from theory to observation of stylized facts. Each metric or indicator results from primary observations, such as surveys of people or an organization’s administrative records, transformed into a variable designed to track an important aspect of wellbeing such as food insecurity. The stylized facts we observe include both the total or average for entire groups, and also the degree of variation among individuals within groups.

This section begins our exploration of the data with the fundamental question underlying all economic measurement: do people have enough things to reach a socially acceptable level of wellbeing? Is the distribution of resources and outcomes among individuals and between groups improving or worsening?

By the end of this section, you will be able to:

  1.

    Describe how economists measure deprivation, inequality and inequity using poverty lines, Lorenz curves and the Gini index;

  2.

    Describe how poverty lines have been and are determined in the U.S. and around the world;

  3.

    Summarize the findings of recent household surveys and other data on poverty, inequality and inequity in the U.S. and worldwide; and

  4.

    Summarize the differences between measured poverty, inequality and inequity in market incomes before and after accounting for taxes, transfers and government programs.

7.1.2 Analytical Tools

The data we have are empirical observations, made by people to answer practical questions. To guide decisions, we need observations that correspond to the concepts we care about. This book begins our exploration of observed data with measurement of how well individuals and groups have achieved a standard of living that meets human needs and is socially acceptable.

The toolkit of economics starts with the causal diagrams introduced in the first half of this book, used to guide creation and interpretation of the measurement tools introduced now. Those models showed how production of food and other things is linked to consumption and each person’s standard of living, with an important role for both individual choices and collective action in helping each person reach their goals. Each outcome is the result of multiple factors interacting in ways that depend on market structure and public-sector intervention. Now that we turn to measurement, each data point we observe could potentially be explained using our analytical diagrams, and we will occasionally redraw those diagrams in this second half of the book, but our goal is to describe the most important outcomes for groups and individuals.

7.1.2.1 Understanding Deprivation: The Lived Experience of People in Poverty

Poverty is the state of not having sufficient resources to attain a population’s minimum standard of living, typically defined in terms of the basic necessities required to participate in the economic and social life of that society. Some of these needs are universal human requirements, such as food and clothing, but the level and nature of basic needs such as housing, transport and communication vary over time and place. The criteria and methods used to measure whether people can meet their basic needs also vary, but generally involve survey data on household income, expenditure or assets relative to a poverty line or other criteria. Almost all governments and several international agencies track the ‘headcount’ number of people below various poverty lines to target social programs and evaluate economic policies, as well as the poverty rate defined as the percentage of each population living below a given poverty line.
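
Since the headcount and the poverty rate are simple counting rules, a minimal sketch may help fix the definitions; the incomes and the poverty line below are purely illustrative, not survey data.

```python
# Minimal sketch: poverty headcount and poverty rate from a list of
# per-person incomes (illustrative numbers, not real survey data).

def poverty_headcount(incomes, poverty_line):
    """Number of people with income below the poverty line."""
    return sum(1 for y in incomes if y < poverty_line)

def poverty_rate(incomes, poverty_line):
    """Share of the population below the poverty line, in percent."""
    return 100 * poverty_headcount(incomes, poverty_line) / len(incomes)

incomes = [8_000, 12_500, 21_000, 35_000, 60_000, 140_000]  # hypothetical annual incomes
line = 15_000                                               # hypothetical poverty line

print(poverty_headcount(incomes, line))       # 2 people below the line
print(round(poverty_rate(incomes, line), 1))  # 33.3 percent of this small population
```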

Beyond material deprivation, many people experience social exclusion based on their appearance, ancestry or religious beliefs, legal status or other aspects of identity. The term ‘marginalization’ refers to exclusion from cultural or political influence, which can be both a cause and a consequence of poverty. Economic analysis shows how decisions at the margin of production and consumption drive the prices and quantities we see, and understanding the lives of people at the margins of society is similarly helpful to see the degree to which a population’s goals are being met.

Economic analysis of poverty begins with the material requisites of wellbeing for individuals and households and adds up outcomes for social groups who have experienced varying degrees of social exclusion due to their group identity. Measurement starts with purchasing power over all goods and services, which is closely linked to a wide range of measurable outcomes such as the heights of children. Household incomes are closely linked to individual outcomes partly due to each person’s own spending, and partly due to social and environmental factors that are correlated with both incomes and outcomes. For example, changes in child height are influenced by things each family buys or makes for themselves such as food and housing, as well as things that higher-income communities obtain through collective action such as clean water, and things that help drive the higher income such as the community’s level of education. The data in this book focus on change in agriculture, food and nutrition, which is closely related to other aspects of life that would be measured in different ways. Other fields of economics focus on data relating to education and cognitive development, physical health and disability, mental health and distress, employment and livelihoods, housing and transportation or many other aspects of poverty beyond the focus of this book.

Measuring poverty is difficult due to limited data, especially about people and aspects of life that were not historical priorities for data collection and analysis. Data availability is itself an important aspect of economic and social development, steered by the willingness and ability of people to devote their time and resources towards obtaining more detailed and accurate information. The first major agricultural census of the English-speaking world was the Domesday Book, completed over 900 years ago by the King of England to guide tax collection. In recent decades many countries have attempted to collect nearly complete census data on all agricultural enterprises, and many more conduct nationally representative sample surveys every few years. In addition to those large and costly household consumption surveys, a wide range of other data on specific aspects of household wellbeing is commonly used.

The definition of poverty usually focuses on households because many aspects of each person’s living standards are pooled among people living together, especially regarding the wellbeing of children. Measurement also usually focuses on income and expenditure over an entire year to smooth out short-term fluctuations in what people can acquire, and measurement of poverty often also aims to take account of assets and wealth, which provide useful additional information about people’s ability to meet their needs over a longer time horizon.

Poverty can be defined and measured using either income or expenditure. Measuring income is preferred in populations where most work is in the formal sector, so labor earnings, profits from a business, or rent and interest payments from other assets are all recorded and can readily be reported as the individual or household’s total income for the year. The resulting data may include only ‘market income’ or may be defined more broadly as after-tax income that includes all payments to the government and receipt of social assistance. Both distributions are important for equity. In places where a large fraction of households are self-employed family farmers or workers in the informal sector, most income is not recorded, and it is preferable to ask people about their consumption and expenditure over the past month or year. Those household surveys typically aim to ask about consumption from all sources, including food produced by the household themselves.
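
A minimal sketch of the consumption-based approach just described, for a single hypothetical farm household: purchased items are added to own-produced food valued at local prices. All items, quantities and prices are invented for illustration.

```python
# Illustrative consumption aggregate for one farm household, combining
# purchases with own-produced food valued at local prices (hypothetical data).

purchases = {"rice": 40.0, "cooking oil": 12.0, "soap": 3.0}  # spending, local currency
own_production = {"maize": 25.0, "beans": 8.0}                # kg eaten from the household's own farm
local_prices = {"maize": 0.6, "beans": 1.5}                   # local price per kg

consumption = sum(purchases.values()) + sum(
    qty * local_prices[item] for item, qty in own_production.items()
)
print(consumption)  # 55.0 purchased + 27.0 own-produced = 82.0 in local currency
```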

All poverty is inherently multidimensional, starting with the definition of income and expenditure as the household or individual’s purchasing power for all goods and services. Some metrics also count education and health as separate dimensions of wellbeing, as in the Multidimensional Poverty Index used since 2010 by the United Nations Development Program (UNDP), which adds up deprivations in three dimensions: health (measured by height, weight and child mortality), education (measured by school attendance) and living standards (measured by a set of physical assets such as electricity and housing). Other organizations have proposed different multidimensional indexes as a summary metric for advocacy purposes, but researchers typically prefer to use separate indicators for health, educational attainment or other nonmarket aspects of wellbeing, for comparison to poverty in terms of market goods and services that could be obtained through the household’s own income and expenditure.
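
A composite index of this kind can be sketched as a weighted sum of deprivation indicators compared to a cutoff. The dimensions, weights and cutoff below are illustrative only, not the official UNDP methodology.

```python
# Stylized multidimensional deprivation score: weights and cutoff are
# illustrative, not the official UNDP formula.

weights = {"health": 1/3, "education": 1/3, "living_standards": 1/3}
household = {"health": 1, "education": 0, "living_standards": 1}  # 1 = deprived in that dimension

score = sum(weights[d] * household[d] for d in weights)
print(round(score, 2))  # 0.67 weighted deprivation score
print(score >= 1/3)     # True: counted as multidimensionally poor at an illustrative 1/3 cutoff
```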

7.1.2.2 Defining Poverty: Mollie Orshansky and the U.S. Poverty Line

One of the oldest poverty lines in continuous use by a national government was introduced in 1964 to guide U.S. President Lyndon Johnson’s War on Poverty programs, using methods developed by an economist in the Social Security Administration named Mollie Orshansky. Orshansky’s poverty line used market income relative to food spending and has remained the U.S. government’s official definition of poverty with only modest adjustments over the past sixty years. We will describe the U.S. poverty measurement methods and results in some detail, first because the U.S. experience demonstrates the close link between poverty measurement, household food spending and nutrition assistance, and second because the resulting data offer an unusually long period of continuous measurement using a transparent and comparable method.

When Mollie Orshansky set out to develop a politically and socially acceptable poverty line for the U.S., the USDA had just published a revised set of low-cost food plans that would meet nutrient requirements using a variety of foods widely consumed by Americans. Orshansky had previously worked in the nutrition department at USDA, and she was able to use the most recent diet plan for 1961 to identify the cost of a minimally acceptable diet for households of varying size and composition. Orshansky had also worked with the USDA’s nationally representative household food consumption survey of 1955, which showed Engel’s law at work, leading lower-income households to devote a larger fraction of their expenditure to food. Orshansky found that the average U.S. household was spending one-third of its income on food, and successfully argued that having to spend more than that to buy a minimally acceptable diet was a clear sign of being poor in America.

The U.S. poverty line introduced in 1964 was defined as three times the cost of minimally acceptable USDA food plans for each member of the household, with small adjustments for households of one or two people. That procedure turned out to be consistent with many people’s intuition about living standards in America at that time, yielding a threshold just over $3000 for a family of four. Having gained sufficient consensus for adoption of that standard, the next challenge was how to adjust the line for inflation over time. Until 1968 the Social Security Administration recalculated diet costs each year using new food prices, but in 1969 the U.S. Census Bureau and other Federal agencies introduced a simpler method that has been used ever since. They reverted to the 1963 diet costs for each size household and adjusted the resulting income level by the country’s overall consumer price index (CPI) each year.
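
The arithmetic just described can be sketched in two steps: multiply the food plan cost by three to get the original threshold, then carry that threshold forward using only the CPI. The food plan cost and CPI values below are illustrative stand-ins, not the official series.

```python
# Sketch of the Orshansky procedure and the later CPI-only updating,
# using illustrative numbers rather than the official series.

food_plan_cost_1963 = 1_033               # hypothetical annual food plan cost, family of four
threshold_1963 = 3 * food_plan_cost_1963  # food assumed to be one-third of total need
print(threshold_1963)                     # 3099, near the ~$3,000 threshold described above

# After 1969, the line is carried forward by overall consumer price inflation only.
cpi_1963, cpi_2023 = 30.6, 304.7          # illustrative CPI index values
threshold_2023 = threshold_1963 * (cpi_2023 / cpi_1963)
print(round(threshold_2023))              # 30858, near the roughly $30,000 level for 2023 discussed below
```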

For calendar year 2023, the updated U.S. poverty level is around $30,000 per year for a family of four. Raising poverty lines by only the CPI, when other Americans’ incomes have risen by more than the CPI, has let the U.S. poverty line fall relative to the income of most Americans. When Mollie Orshansky set her threshold, it was 44% of the median income for a family of four, and as of 2023 the official poverty line is only 28% of the median income for a family of four. The USDA has also continued to update its low-cost food plans, which now add up to about $11,500 per year for a family of four, or 38% of the poverty line, instead of the 33% share used by Orshansky. The official U.S. poverty line has fallen relative to other incomes, but most U.S. anti-poverty programs set their threshold at a higher level. For example, the Supplemental Nutrition Assistance Program (SNAP) is open to households with gross incomes up to 130% of the poverty line, while eligibility for the supplemental nutrition program for Women, Infants and Children (WIC) extends up to 185% of the poverty line.

Beyond household income, measurement of a person’s wellbeing can include wealth and assets as well as age and disability status, all of which are counted in addition to income as factors in eligibility for many anti-poverty programs in the U.S. and elsewhere. A further question is how to account for differences in the purchasing power of household income and program benefits at each place and time. The U.S. national poverty line uses a single CPI reflecting the average expenditure pattern of consumers in all urban areas of the country, with a higher poverty line reflecting higher cost of living only for Alaska and Hawaii, but some anti-poverty programs recognize the role of regional price differences. The U.S. has especially large variation in housing costs, due in part to local government regulations that limit the placement and size of new buildings. Without those limits, housing supply would respond more quickly to demand, with prices set by the marginal cost of construction and utilities. Rules that limit the height and density of construction make supply inelastic, so rental costs vary with demand, which is higher in places with higher incomes, due to both earning opportunities from local employment and local amenities that attract high-income residents. Variation in rental prices is one reason why the eligibility threshold for the U.S. housing assistance program known as section 8 is set at one half of each area’s median income, and the SNAP formula also takes account of housing costs to some degree, by raising the assistance provided to most recipients for whom housing costs would otherwise exceed half of their net income.

Poverty thresholds are used not only to count the number or fraction of people in poverty and to determine eligibility for anti-poverty programs, but also to determine each household’s depth of poverty below the threshold which can be used in anti-poverty programs to determine the level of assistance provided. In the U.S., for example, SNAP provides a variable level of cash-like assistance designed to ensure that households can afford the USDA’s minimally acceptable diet, now known as the Thrifty Food Plan. The composition of that diet is adjusted periodically, most recently in 2021, and its cost is updated monthly based on national average food prices. The program’s maximum benefit, provided to households with zero income, is the entire cost of the Thrifty Food Plan. Actual benefits are set by the SNAP formula, based on the longstanding expectation that food spending should not have to exceed 30% of the program participant’s income, net of deductions such as the housing cost adjustment. Benefit levels are small for households near the threshold of eligibility, thereby linking the level of assistance to the population’s depth of poverty.
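
A minimal sketch of the benefit rule described above: the maximum benefit equals the cost of the Thrifty Food Plan, reduced by 30% of the household’s net income. The dollar amounts below are illustrative, and the real formula includes deductions, minimum benefits and other details omitted here.

```python
# Minimal sketch of the SNAP benefit rule: maximum benefit equals the
# Thrifty Food Plan cost, reduced by 30% of net income (illustrative amounts).

def snap_benefit(net_monthly_income, thrifty_food_plan_cost):
    """Benefit = maximum benefit minus 30% of net income, never below zero."""
    return max(0.0, thrifty_food_plan_cost - 0.3 * net_monthly_income)

tfp = 975.0  # hypothetical monthly Thrifty Food Plan cost for a family of four
print(snap_benefit(0, tfp))      # 975.0: households with zero income receive the full amount
print(snap_benefit(2_000, tfp))  # 375.0: the benefit shrinks as net income rises
print(snap_benefit(3_500, tfp))  # 0.0: benefits are small near the eligibility threshold
```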

7.1.2.3 Measuring Poverty: Trends and Disparities Among Groups in the U.S.

When the U.S. government adopted Mollie Orshansky’s method in 1964, her formula was used retroactively to construct an estimate for 1959. The net result is more than sixty years of data to track changes in poverty rates and inequities between demographic groups as shown in the charts starting with Fig. 7.1.

Fig. 7.1
Two line graphs plot the number of people in poverty in millions and the poverty rate as a percent of the U.S. population from 1959 to 2022. The number of people in poverty in 2022 was 37.9 million. The poverty rate of the U.S. population in 2022 was 11.5 percent.

Source: Reproduced from Emily A. Shrider and John Creamer, Poverty in the United States 2022 [U.S. Census Bureau, Washington DC, September 2023]. Updated publications in this series are at https://www.census.gov/library/publications/time-series/p60.html

Number and percent of people in poverty in the U.S., 1959 to 2022

Figure 7.1 is the book’s first descriptive chart, using line graphs to illustrate change over time. Later figures will use bar charts to compare magnitudes of discrete categories, or scatterplots to show a larger number of individual observations. As with the analytical diagrams in Chapters 1–6, the first element of each chart is its title and axis labels, identifying what is being shown. In Fig. 7.1, the lower panel shows a range from 0 to 25% of the U.S. population, and the top panel shows a range from 25 to 50 million people, with a break denoted // to show that the vertical axis does not start at zero. Along the horizontal axis, both lines are shown to have breaks in 2013 when survey questions about income changed slightly, and in 2017 when U.S. data-processing systems changed slightly. The note below the chart indicates its source. In this case we reproduce the actual chart published by the U.S. government, in part because their graphics are of very high quality, but also because comparable charts are published each year so that updated versions can readily be obtained from the U.S. Census Bureau website.

The changing prevalence of poverty shown in Fig. 7.1 provides a valuable introduction to data visualization. Here we focus on change over time, and later charts will show differences by income level. Incomes often (but not always) grow over time, so both kinds of chart trace out patterns associated with the process of economic development, similarly to the way we might trace out a child’s height relative to other aspects of child growth. In Fig. 7.1 and other time series, we can see some fluctuations that rise and fall repeatedly like waves, some sustained trends that persist from year to year or decade to decade and some inflection points where the trends change. We also notice artifacts created by the measurement method that do not reflect reality, in this case the apparent jump up in 2013 that was created by a change in how the survey asked people about their income.

Figure 7.1 shows that in 1959 about 40 million people or 22% of the U.S. population had market incomes below the poverty line. In other words, more than one in five Americans could not afford to buy the USDA’s low-cost diet and still spend only a third of their income on food. By contrast for the world, a comparable kind of metric introduced by the FAO and the World Bank in 2022 showed that about 3 billion or 38% of the entire global population could not afford a benchmark low-cost diet. The global benchmark diet and income shares used for that global monitoring differ from those initially used to define the U.S. poverty line, but the same procedure was applied almost sixty years later.
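
Both the Orshansky-style U.S. measure and the 2022 global affordability metric rest on the same comparison: a household is counted as unable to afford the benchmark diet if buying it would take more than a fixed share of income. The income, diet cost and share below are illustrative, not the values used in either official calculation.

```python
# Sketch of the affordability rule: the benchmark diet is affordable if its
# cost fits within a fixed share of income (all values hypothetical).

def can_afford(income_per_day, diet_cost_per_day, max_food_share):
    return diet_cost_per_day <= max_food_share * income_per_day

print(can_afford(12.00, 3.50, 1/3))  # True: the diet fits within a third of income
print(can_afford(6.00, 3.50, 1/3))   # False: counted as unable to afford the diet
```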

As shown in Fig. 7.1, poverty rates in the U.S. dropped sharply for a decade from 1959 to 1969, followed by fluctuations in the poverty rate around a trend increase in the number of people in poverty, as the overall U.S. population grew. The absolute number of people in poverty peaked in 2010–2014, then both the rate and the number dropped sharply to a historic low rate in 2019 just before the COVID-19 pandemic, then rose and stabilized in 2021–2022 around the previous low points of 1973–1974 and 1999–2000 at a poverty rate around 11.5% of the U.S. population.

The rise in poverty from 2008 to 2010 drove a reassessment of how poverty should be measured, aiming to capture a household’s ability to meet basic needs and counting their receipt of government benefits instead of only market income. This effort was led by Rebecca Blank, an academic economist who rose to leadership of the U.S. Department of Commerce in 2011. At that time the government began publishing a Supplemental Poverty Measure (SPM), drawing on decades of research and experimentation with different data sources. In 2019 the U.S. government decided to retain the simpler ‘official’ poverty line based on market income to determine program eligibility, while using frequently updated SPM procedures to track changes and disparities in poverty after receipt of program benefits.

The new methods introduced in 2011 aimed to improve measurement of change and differences among groups while matching as closely as possible the initial baseline number and percent of all Americans living in poverty. Calibrating the supplemental measure so national totals would be similar to results using the official measure helped decision-makers focus on changes and differences, avoiding debates about whether the definition of ‘poverty’ was too high or too low a standard of living. Using the SPM for monitoring purposes instead of program eligibility is also helpful for decision-making, since it allows program benefits to be included in the new poverty measure, leading to the results shown in Fig. 7.2.

Fig. 7.2
A line graph of the percent of the U.S. population in poverty from 2009 to 2022 plots the supplemental poverty measure, which includes program benefits, and the official poverty rate based on market income only. The supplemental poverty measure was 15.1 in 2009 and 12.4 in 2022.

Source: Reproduced from Emily A. Shrider and John Creamer, Poverty in the United States 2022 [U.S. Census Bureau, Washington DC, September 2023]. Note methods changed in 2013 and 2017, creating breaks in the series that are artifacts of measurement instead of actual changes in those years. Updated publications in this series are at https://www.census.gov/library/publications/time-series/p60.html

U.S. poverty rates using official and supplemental measures, 2009 to 2022

Results shown in Fig. 7.2 reveal little difference between the two poverty measures from 2009 to 2019, as the rise of poverty rates at the start of the period was followed by the same gradual decline found by both measurement methods. The big change came during the COVID-19 pandemic, when the official poverty rate in terms of market incomes jumped up in 2020 due to job losses, and then stayed high due to increases in the CPI that raised the poverty line by about as much as incomes had risen. In contrast, the supplemental measure showed an accelerating downward trend in the poverty rate due to Federal spending on pandemic-response programs in 2020 and 2021, and a reversion to the pre-pandemic poverty rate when those programs ended in 2022.

The development and use of the supplemental poverty measure provides a much clearer picture of what payments and receipts move people into or out of poverty each year, revealing the important role of nutrition assistance and health spending. When the Census Bureau calculates the supplemental measure, they can incrementally remove each adjustment to market income and observe how many people would have been below the supplemental poverty line if that category of spending had not been present. The results are shown in Fig. 7.3.

Fig. 7.3
A positive-negative bar graph plots how millions of people moved out of or into poverty by spending from 2016 to 2022 on refundable tax credits, COVID-relief payments, SNAP with school meals, SNAP, housing subsidies, S S I, refundable child tax credit, school meals, T A N F, and child support received.

Source: Authors’ chart, using data for 2016–2019 extracted from Table A7 of Liana Fox [2020], the Supplemental Poverty Measure 2017 and 2019, and then Table B8 of Poverty in the United States 2021 and 2022 [various authors], all from U.S. Census Bureau, Washington DC. Data shown omit Social Security payments, which moved 26 to 29 million people out of poverty each year over this period, primarily Americans over 65 years of age. Updated publications in this series are at https://www.census.gov/library/publications/time-series/p60.html

Millions of people moved out of or into poverty by category of spending, 2016–2022

The bar chart in Fig. 7.3 is designed to show both change over time through the COVID pandemic and comparison between spending categories. Each category refers to a particular type of payment tracked by the Federal government, ranked in order of impact on the number of people in 2022. Details of each payment type are specific to the U.S., but somewhat similar patterns could be observed elsewhere. Here the vertical axis shows the number of people moved into or out of poverty by each category, so negative numbers mean fewer people in poverty, and lighter colors mark more recent years so that the category labels remain visible.

From the left of the diagram, the first categories are tax credits on earnings paid when people file income taxes, whose impact on poverty declined during the period of COVID-related unemployment in 2020; the burst of COVID relief payments in 2020 and 2021; and the use of SNAP to provide additional meals for children despite school closures in 2020–2022. These are followed by SNAP itself, housing assistance through section 8 and other programs, the U.S. Supplemental Security Income (SSI) program for people with disabilities, and the temporary tax credit per child in 2020 and 2021 that was allowed to expire in 2022. Next come school meals, the small remaining U.S. program of cash assistance known as Temporary Assistance for Needy Families (TANF, formerly known as ‘welfare’ payments), private child support received (typically from a non-custodial parent), unemployment insurance (which spiked up in 2020), programs to help with household utilities and energy for heating in winter, worker’s compensation for injuries on the job, the special nutrition program for Women, Infants and Children (WIC), and a small new broadband assistance program.

On the right side of the diagram are payments made by people that might push them below the poverty line, notably the payment of child support, Federal income tax paid, work-related expenses such as uniforms and travel costs, payroll taxes to pay for social security and other programs under the Federal Insurance Contributions Act (FICA), and then medical expenses. These payments differ and fluctuate in ways that are extremely revealing about the nature of deprivation and poverty in the U.S. and potentially elsewhere. For example, by far the most important cause of falling into poverty before the pandemic was uninsured medical expenses on the right of the chart. That kind of expense became less burdensome due to the expansion of Federal health insurance and was particularly low during the pandemic when COVID displaced a large fraction of other health care services.

For this book the most important insight from Fig. 7.3 is the relatively large role of Federal food assistance. Adding up the effects of SNAP, school meals and WIC, those three programs accounted for 22% of the impact on number of people in poverty shown in the pre-pandemic period (2016–2019), then 18% during the period of peak pandemic aid (2020–2021), and over twice that fraction at 39% after most COVID aid was ended but food assistance rose in the most recent year (2022). The precise number affected by a combination of programs differs from the sum of their individual effects because some people participate in multiple programs, but food assistance clearly plays a very large role in anti-poverty programs in the U.S. as it does elsewhere. In 2021 the Thrifty Food Plan aspect of the SNAP benefits formula was adjusted upwards to ensure that recipients could afford to meet Federal dietary guidelines and other criteria, and the expansion of SNAP around school meals was continued while other pandemic aid was cut, which explains why the combined food assistance programs accounted for almost 40% of the numbers lifted out of poverty shown for 2022 in Fig. 7.3.
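
The counterfactual accounting behind Fig. 7.3 can be sketched directly: remove one category of benefits at a time from each household’s resources and recount how many fall below the line. The households, benefit amounts and poverty line below are hypothetical, and the Census Bureau’s actual calculation counts people within households rather than the households themselves.

```python
# Sketch of the counterfactual attribution behind Fig. 7.3: drop one benefit
# at a time and recount who falls below the line (all values hypothetical).

poverty_line = 30_000
households = [
    {"resources": 28_000, "snap": 3_500, "school_meals": 800},  # resources already include benefits
    {"resources": 31_000, "snap": 2_000, "school_meals": 600},
    {"resources": 45_000, "snap": 0,     "school_meals": 0},
]

def count_poor(hhs, exclude=None):
    poor = 0
    for hh in hhs:
        total = hh["resources"] - (hh[exclude] if exclude else 0)
        poor += total < poverty_line
    return poor

baseline = count_poor(households)
for benefit in ("snap", "school_meals"):
    moved = count_poor(households, exclude=benefit) - baseline
    print(benefit, moved)  # snap 1, school_meals 0: households kept out of poverty by each program
```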

The supplemental poverty measure is particularly helpful for examining disparities between groups. In the U.S. census and other surveys, respondents are invited to self-identify in terms of several non-exclusive categories. These can then be used to compare groups such as the six categories shown in Fig. 7.4.

Fig. 7.4
A multi-line graph plots percent poverty rates for American Indian and Alaska Natives alone, Hispanic of any race, Black alone and not Hispanic, all racial categories, Asian alone, two or more races, and white alone and not Hispanic from 2009 to 2022.

Source: Authors’ chart of data from appendix Table B-2, Number and Percentage of People in Poverty Using the Supplemental Poverty Measure by Age, Race, and Hispanic Origin: 2009 to 2022, in Emily A. Shrider and John Creamer, Poverty in the United States 2022 [U.S. Census Bureau, Washington DC, September 2023]. Note methods changed in 2013 and 2017, so changes at that year may be artifacts of measurement. Updated data and details are available at www.census.gov

Poverty rates using the supplemental measure by census category, 2009–2022

Labels for each line in Fig. 7.4 are ordered by each group’s poverty rate in 2022. Results for the American Indian and Alaska Native category are highly variable due to the relatively small number of survey respondents, and the Census Bureau reports a margin of error of ±4% around the reported level of 23.2% in 2022. In contrast, among people who report themselves to be Hispanic of any race, 19.3% were in poverty in 2022, and among respondents who classify themselves as only Black and not also any other race or Hispanic ancestry, 17.2% were in poverty in 2022, both with an estimated margin of error around 1%. Below that is the combined total of all people in the U.S., whose poverty rate is almost identical to that of respondents who classify themselves as only Asian and not any other race and of people who report multiple racial categories, and somewhat above that of the group who classify themselves as only white and not any other race or Hispanic ancestry.

The large drop and then rebound in poverty rates shown in Fig. 7.4, and the reduced disparity in poverty rates between groups to 2021 followed by an increase to 2022, clearly illustrate the value of tracking poverty in ways that closely follow the actual lived experience of every survey respondent in each group. Numbers capture only some aspects of life, but they allow us to compare groups in ways that count each person in the group equally, in contrast to the images or stories that are shared through commercial news or social media. The images and stories that we all see and remember were deliberately selected to attract and retain our attention. Every reader of this book will have different personal experiences, a different group of friends and acquaintances, and different news sources, all of which are important sources of information about individual lives. For questions such as disparities in U.S. poverty rates, totals and averages such as those shown in Fig. 7.4 are helpful because they add up everything that survey respondents themselves said when each person was asked the same questions. Thanks to the supplemental poverty measure championed by Rebecca Blank in the mid-2000s, we can track trends and disparities in the U.S. much more clearly than would otherwise be possible, as illustrated in Fig. 7.4.

The poverty data shown in this section are specific to the U.S., but their basic principles are useful for understanding how policies and programs affect whether a given person and their household fall below or above a country’s poverty line. Most importantly, sixty years of data using the official U.S. measure reveal how millions of children and adults are pushed into poverty during periods of economic downturn, while millions of others remain in poverty even after decades of economic growth. These data show how the number and percentage of people in poverty can be cut dramatically, as occurred in the 1960s and again in the 2010s, then most sharply through the one-time emergency programs in response to the pandemic during 2020 and 2021. The disaggregated data revealed by the U.S. Supplemental Poverty Measure are particularly helpful in revealing the importance of different government programs and policies, including especially the large role played by U.S. nutrition programs (SNAP, WIC and school meals) in lifting people out of poverty, and changes in disparities among groups that account for a wider range of entitlements and purchasing power than just market income counted in earlier poverty lines.

7.1.2.4 Global Poverty: International Comparisons and Trends for Africa and Asia

Looking across countries, each government sets its own national poverty line, and international organizations use data from each country for global statistics. The organization primarily responsible for measuring poverty is the World Bank, which hosts the global office of the International Comparison Program (ICP) that works with national governments to obtain local prices in each country for a standardized set of goods and services representing commonly purchased items in each region of the world. Comparing price levels for the same product in different places allows the ICP to compute purchasing power parity (PPP) exchange rates for every country, converting local currency into U.S. dollars of a given year. The validity of these calculations is limited by data quality and methodological concerns, but in principle each PPP dollar can buy the same quantity of goods and services in every country of the world. The World Bank and many others use PPP exchange rates to convert local prices to those international dollars and thereby compare total production of goods and services in each country. The total value of each country’s output is known as its Gross Domestic Product (GDP). Once a country begins to experience economic development, its GDP can grow exponentially for many decades, leading to extremely wide differences between countries in total production per year as shown on the horizontal axis of Fig. 7.5.

Fig. 7.5
A scatterplot of the national poverty line per day versus G D P per person per year on a log scale. It indicates the international poverty line at 2.15 dollars. The United States records the highest values, followed by Canada, Germany, the United Kingdom, Italy, and other countries plotted on the graph.

Source: Reproduced from Joe Hasell, Max Roser, Esteban Ortiz-Ospina and Pablo Arriagada [2022], Our World in Data: Poverty [https://ourworldindata.org/poverty], using updated data based on Dean Jolliffe and Espen B. Prydz [2016]. Estimating international poverty lines from comparable national thresholds. The Journal of Economic Inequality, 14(2), 185–198. Data shown are national poverty lines per person per day for a total of 152 countries, at the country’s level of income as measured by Gross Domestic Product [GDP] per person per year, all converted from local currencies using PPP exchange rates into 2017 U.S. dollars. Observations are for the latest available year and range from 2001 to 2018. Countries are shown proportional to population with larger countries labeled for convenience. Shading refers to World Bank country groupings, which are [from left to right] low, lower-middle, upper-middle and high income. The horizontal axis is shown using a log scale, so that gaps from one to ten to a hundred thousand appear of equal width, due to exponential income growth over time that creates the large gaps shown

National poverty lines at each level of national income per person, 2001–2018

The vertical axis of Fig. 7.5 shows the national poverty lines used by country governments at each level of total output per person. Not all governments have an official poverty line, and many do not update them every year, so the chart shows the most recently published poverty line for each country at the country’s level of output in that year. Among the lowest levels shown is for Niger, whose national poverty line was set in 2014 at the local currency equivalent of $1.87 per day. Ethiopia and Benin have higher incomes but similar poverty lines in terms of real purchasing power, at $2.04 and $1.77, respectively. China has a much higher level of total output per person but maintains a low poverty line at $3.07, and countries above that level of total output tend to have much higher poverty lines, up to the level of the U.S. and other high-income countries.
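
Converting a national poverty line into the PPP dollars per day plotted in Fig. 7.5 is a simple unit conversion; the local-currency line and PPP rate below are hypothetical.

```python
# Sketch of the PPP conversion behind Fig. 7.5: a national poverty line in
# local currency per year becomes PPP dollars per day (illustrative values).

line_local_per_year = 180_000      # hypothetical national line, local currency units per person per year
ppp_rate = 260.0                   # hypothetical local currency units per PPP dollar

line_ppp_per_day = line_local_per_year / ppp_rate / 365
print(round(line_ppp_per_day, 2))  # about 1.90 PPP dollars per person per day
```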

The central insight from the data in Fig. 7.5 is that poverty lines set by national governments start at a floor around $2.15 per day in real purchasing power and are higher in countries with more output per person, especially where output exceeds about $10,000 per year. When the World Bank introduced its modern global poverty metric in 1990, they used the average of eight low-income countries’ national lines which happened to be almost exactly $1.00 per day in 1985 U.S. dollars. That same method has been updated with each successive round of PPP revisions, to $1.08 in 1993 U.S. dollars and then $1.25 in 2005 U.S. dollars, $1.90 in 2011 U.S. dollars and most recently the $2.15 line shown in Fig. 7.5, which is based on the average poverty line used by the 15 lowest-income countries of the world.
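
The updating procedure described above amounts to averaging the poorest countries’ national lines once they are expressed in common PPP dollars. The values below are invented for illustration and are not the actual 15 national lines used by the World Bank.

```python
# Sketch of setting the international line as the average of the poorest
# countries' national lines, expressed in PPP dollars per day (invented values).

national_lines_ppp = [1.87, 2.04, 1.77, 2.30, 2.55, 2.10, 2.40]  # hypothetical set of national lines
international_line = sum(national_lines_ppp) / len(national_lines_ppp)
print(round(international_line, 2))  # about 2.15
```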

The existence of an extreme poverty threshold below which any person would be considered poor, originally set at $1/day in 1985 prices and now at $2.15 in 2017 prices, is closely related to the cost of food required for day-to-day survival, with some allowance for other expenses such as clothing, housing and transport. In 2020, a team of Tufts University researchers working with the World Bank and the Food and Agriculture Organization (FAO) used ICP price data from 170 countries in the year 2017 to compute the lowest possible cost of reaching nutritional goals using locally available foods. They found that, on average over all countries, meeting daily energy needs from the lowest-cost starchy staple would cost at least $0.79, or about half of the World Bank’s $1.90 extreme poverty line at that time. In actual practice, people living in extreme poverty typically spend 60–80% of their income on food, because they may not have access to the absolute lowest-cost items and usually combine their starchy staple such as rice or cassava with at least one type of more expensive food such as beans or a vegetable. A sufficiently diverse diet to meet all essential nutrient needs, however, is often prohibitively expensive even with the least costly of all locally available foods. At 2017 prices, a minimally supportive diet was found to cost a global average of $2.33 for a healthy adult woman’s estimated average requirements (EARs), $2.71 to reach her recommended dietary allowances (RDAs) and $3.75 for an overall high-quality diet as recommended in national dietary guidelines.
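
Finding the lowest possible cost of meeting nutritional goals can be posed as a linear program, in the spirit of the calculations described above: choose food quantities that satisfy nutrient requirements at minimum cost. The foods, prices, nutrient contents and requirements below are invented, and real least-cost diet calculations use many more foods and constraints.

```python
# Sketch of a least-cost diet as a linear program: minimize cost subject to
# nutrient requirements.  All prices, nutrient contents and requirements are
# invented for illustration.

from scipy.optimize import linprog

foods = ["rice", "beans", "spinach"]
cost_per_100g = [0.05, 0.12, 0.20]              # dollars per 100 g

# Rows: energy (kcal), protein (g), vitamin A (mcg RAE) per 100 g of each food
nutrients = [
    [130, 340, 25],      # energy
    [2.7, 21.0, 2.9],    # protein
    [0,   0,   470],     # vitamin A
]
requirements = [2000, 50, 500]                  # illustrative daily requirements

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so flip the signs
# to express "at least the requirement" constraints.
A_ub = [[-v for v in row] for row in nutrients]
b_ub = [-r for r in requirements]

result = linprog(c=cost_per_100g, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(round(result.fun, 2), [round(x, 1) for x in result.x])  # daily cost and quantities in 100 g units
```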

The World Bank’s international poverty line of $2.15 is clearly inadequate for meeting needs that people in higher-income countries have long considered essential, such as a high-quality diet, but counting the number and proportion of people below that extreme poverty threshold is still helpful to target services and track outcomes for the world’s most vulnerable people. In 2000, governments around the world signed on to the Millennium Development Goals (MDGs), aiming to halve the 1990 proportion of people in extreme poverty by 2015. That goal was achieved, and in 2015 governments endorsed the Sustainable Development Goals (SDGs) that aim to end extreme poverty by 2030. Progress towards that more ambitious goal has been interrupted by the COVID pandemic and associated economic downturn, but poverty reduction efforts have succeeded in the past and could do so in the future.

Data about poverty are themselves a major constraint on the world’s ability to understand and address it. People in poverty are often geographically isolated, living in rural areas with few services of any kind. The most basic facts about their lives may not be recorded or remembered unless the information is of direct use to them. For example, many people in very low-income places grow up without knowing their birthday: they never received a birth certificate and were not asked to provide the exact date until it was too late to remember. Communities in poverty are deprived of many things, including information about themselves and others to guide social services and political representation. This dimension of deprivation is known as data poverty, capturing the role of information in shaping our understanding of living standards and our ability to compare ourselves to other people.

For global poverty in terms of market incomes, comparable data are available from 1990 onwards, based on household surveys with local currency values converted into PPP terms. Earlier surveys used paper-and-pencil questionnaires, laboriously processed by hand using calculators and spreadsheets. Now interviewers often record people’s responses electronically and upload the results for automated analysis in near real time. Much of what is known about poverty still comes from face-to-face visits, but phone surveys and remote data collection, such as satellite imagery of lights at night, are increasingly used. Geocoding allows analysts to link survey data about individuals with information about their environment such as local public services, market infrastructure and agroecological conditions. These changes have greatly enhanced our understanding of living standards and our ability to improve them, while raising a wide range of new questions. Ethical review prior to contacting individuals for data collection has become a high priority for scientific researchers, so survey designs are typically submitted to the Institutional Review Boards (IRBs) of both the organization carrying out the research and a governing body in the place where the survey will take place.

Figures 7.6 and 7.7 track results compiled by the World Bank’s global poverty researchers, using over 1900 surveys from 183 countries assembled in an online database known as the Poverty and Inequality Platform (PIP). Many of the underlying surveys ask respondents about their income, typically including taxes paid and benefits received as in the U.S. supplemental poverty measure, but for people in very low-income settings it is often more practical to ask about total spending over the previous month or year. The PIP database uses both income and expenditure surveys to estimate the number and proportion of people in each country below any given poverty line. Results shown here are for the currently applicable World Bank standard of $2.15 per person per day at 2017 PPP prices, starting with the number of people in Fig. 7.6.

Fig. 7.6
A multi-line graph plots the number of people living on less than 2.15 dollars per day from 1990 to 2019 in the world, Sub-Saharan Africa, South Asia, and East Asia and the Pacific. The number of people trends downward in the world, in South Asia, and in East Asia and the Pacific.

Source: Reproduced from Joe Hasell, Max Roser, Esteban Ortiz-Ospina and Pablo Arriagada [2022], Our World in Data: Poverty [https://ourworldindata.org/poverty], using data from the World Bank Poverty and Inequality Platform [2022]. Data shown are estimated by World Bank researchers, based on income or expenditure reported by people in a total of 1939 surveys from 183 countries, with values in local currency in each year converted to 2017 U.S. dollars using purchasing power parity [PPP] exchange rates for comparison to the extreme poverty line of $2.15 per day

Number of people living on less than $2.15 per day in selected regions, 1990–2019

As shown in Fig. 7.6, in the early 1990s there were about 2 billion people in extreme poverty worldwide, of whom about half were in East Asia and the Pacific, largely in China, and a fourth were in South Asia, largely in India. By 2019, the worldwide total had been cut to under 0.7 billion, most of whom are in Africa. The near elimination of extreme poverty in East Asia took about 30 years, interrupted by two years of worsening poverty in 1997–1998. South Asia had a roughly constant number of people in poverty from 1990 to the early 2000s, after which its reduction parallels the trends elsewhere, whereas in Africa the number of people in extreme poverty continues to rise. The limited available data for later years suggest that the number in poverty rose during the pandemic years of 2020–2021 but could decline again afterward if national governments take appropriate action to control disease and reduce poverty.

Much of the change in numbers of people in poverty is due to differences in population growth, which varies widely by country and income level, so it is helpful to see the same data in percentage terms as shown in Fig. 7.7.

Fig. 7.7
A multi-line graph plots the percent of people living on less than 2.15 dollars per day from 1990 to 2019 in the world, Sub-Saharan Africa, South Asia, and East Asia and the Pacific. All the curves trend downward. The percent was highest in Sub-Saharan Africa and lowest in East Asia and the Pacific.

Source: Reproduced from Joe Hasell, Max Roser, Esteban Ortiz-Ospina and Pablo Arriagada [2022], Our World in Data: Poverty [https://ourworldindata.org/poverty], using data from the World Bank Poverty and Inequality Platform [2022]. Data shown are estimated by World Bank researchers, based on income or expenditure reported by people in a total of 1939 surveys from 183 countries, with values in local currency in each year converted to 2017 U.S. dollars using purchasing power parity [PPP] exchange rates for comparison to the extreme poverty line of $2.15 per day

Percent of people living on less than $2.15 per day in selected regions, 1990–2019

The poverty rate data in Fig. 7.7 reveal that, as recently as 1990, about 38% of the whole world’s population and 66% of people in East Asia and the Pacific were living in extreme poverty. By 2019, the global rate had been cut to below 8.5%. In South Asia, the percentage of people in extreme poverty was cut from 50% in 1990 to about the global average in 2019. In Africa, the extreme poverty rate peaked in 1994 at 59%. During the 1994–2010 period Africa’s poverty rate declined in parallel with declines in South Asia and East Asia and the Pacific, then continued falling at about the same pace, without the accelerated declines seen in Asia in the late 2000s and early 2010s. The terrible setback due to COVID in 2020–2021 and the difficult recovery since then are not shown in Fig. 7.7 but can be monitored using the survey data assembled by the World Bank and other researchers.

7.1.2.5 Inequality, Lorenz Curves and the Gini Index

Many aspects of economic and social life are shaped by inequality, below and above any poverty line. The degree to which incomes are concentrated among a few people within any population can conveniently be measured using Lorenz curves defined in Fig. 7.8. The curves, first drawn by Max Lorenz and published in 1905 while he was still a student at the University of Wisconsin, allow all kinds of distributions to be visualized and compared in a standardized manner. His insight was to transform the data into cumulative proportions of all people and their total income, so that the number of people and units of measure would not influence the results. A perfectly uniform distribution with complete equality would be drawn as a diagonal line, along which each additional person accounts for the same proportion of income. Soon after Lorenz showed how distributions could be drawn using standardized curves, in 1914 the Italian statistician Corrado Gini published the idea that inequality could be summarized by the area between a Lorenz curve and that line of equality, as shown with real data for the U.S. in Fig. 7.8.

Fig. 7.8
A line graph of the cumulative share of income versus the cumulative share of people adjusted for household size for equality, after taxes, and market income. Equality, after taxes, and market income trend in an increasing pattern. Gini index = A over A + B = 0.467 for market income and 0.417 after taxes.

Source: Authors’ chart of data from Table B-4 in Gloria Guzman and Melissa Kollar, Income in the United States 2022 [U.S. Census Bureau, Washington DC, September 2023]. Updated publications in this series are at https://www.census.gov/library/publications/time-series/p60.html

Lorenz curve and Gini index for income before and after taxes in the U.S., 2022

The Lorenz curves shown in Fig. 7.8 are drawn for money income in the U.S., pooled within households and counted for individuals using the same adjustments for household size and composition that were developed for the supplemental poverty measure. The chart contrasts Lorenz curves for income before and after taxes, which for this calculation counts Federal and state income taxes and credits or rebates, as well as U.S. payroll taxes (FICA). The Gini index is calculated as the population’s cumulative gap between equality and their Lorenz curve (area A), as a fraction of complete equality (area A + B). That index ranges from zero under perfect equality to one under complete inequality, in which a single person receives all income.
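
A minimal sketch of that calculation from a short list of illustrative incomes, using the trapezoid rule to approximate the area under the Lorenz curve; real estimates also apply survey weights and the household-size adjustments noted above.

```python
# Minimal sketch of a Lorenz curve and Gini index from a list of incomes,
# using the area definition Gini = A / (A + B).  Incomes are illustrative.

import numpy as np

incomes = np.array([5, 10, 15, 20, 50], dtype=float)
incomes.sort()

cum_people = np.arange(0, len(incomes) + 1) / len(incomes)            # cumulative share of people, 0 to 1
cum_income = np.concatenate([[0.0], np.cumsum(incomes)]) / incomes.sum()  # cumulative share of income

# Area under the Lorenz curve (B) by the trapezoid rule; A + B = 1/2.
area_B = np.trapz(cum_income, cum_people)
gini = (0.5 - area_B) / 0.5
print(round(gini, 3))  # 0.4 for these five incomes
```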

The Gini index, also known as the Gini coefficient, is a very convenient summary statistic, but like any summary it omits potentially important information. For example, the Gini coefficient does not distinguish between inequality at the top or at the bottom of the income distribution, so the U.S. Census Bureau and others typically complement it with a variety of other data to answer more specific questions. Corrado Gini himself has been harshly judged by history due to his support for fascism and eugenics, but by coincidence the name of his index can also be read as the acronym for a General index of inequality.

The simplicity and clarity of Lorenz curves make the resulting Gini coefficients the most widely used measure of inequality across countries and over time. Figure 7.9 uses a large collection of these coefficients, estimated using comparable methods in a wide range of countries over many years, plotted against the country’s national income per person. The horizontal axis differs slightly from the measure of each country’s total production (GDP) shown earlier, because here we focus on gross national income (GNI), which includes not just production within the country but also remittances and other income from abroad, net of payments to foreigners, again on a log scale along the horizontal axis of Fig. 7.9.

Fig. 7.9
A scatterplot of the Gini coefficient, where 100 = most unequal, versus gross national income per person in U.S. dollars per year at 2017 P P P prices for Southern Africa, Latin America, the United States, and all other countries. The Gini coefficient was highest in Southern Africa.

Source: Authors’ chart of data from World Bank estimates, from https://databank.worldbank.org. Data shown are a total of 1353 observations from 137 countries in each year for which both Gini coefficients and GNI are available. Gini coefficients are estimated from household survey data by World Bank researchers and denoted SI.POV.GINI. Gross national income per person at PPP prices is estimated from national accounts and denoted NY.GNP.PCAP.PP.KD

Income inequality at each level of national income per person, 1967–2018

Showing a very wide range of Gini coefficient values on a single chart is helpful for addressing a common hypothesis about inequality that was first formulated by Simon Kuznets, whose early observations of economic development led to a paper in 1955 suggesting the possibility of an inverted-U relationship between a country’s inequality level and its average national income per capita. The Kuznets hypothesis is based on the idea that in very low-income countries almost everyone might be near the floor level of subsistence, so there would be little inequality because everyone is poor. Then as some people in that country get rich inequality might increase, until others catch up and the distribution becomes more equal at a higher level of income. Kuznets himself saw the hypothesis as a conjecture, to be tested over time as better data became available.

What Fig. 7.9 reveals is that, at least for the modern era since 1967, there is no general inverted-U relationship when looking across the world as a whole. Instead, there is a wide range of Gini coefficients at each level of income and strong regional clustering. At the top, the highest levels of inequality are observed in the five countries of Southern Africa: Botswana, Eswatini (formerly Swaziland), Lesotho, Namibia and South Africa. These are countries dominated by the history of apartheid, by which European settlers seized land and severely limited all economic opportunities for indigenous Africans until the 1990s. The next highest group are the 18 countries in this dataset from Latin America, namely Argentina, Belize, Bolivia, Brazil, Chile, Colombia, Costa Rica, Dominican Republic, Ecuador, El Salvador, Guatemala, Honduras, Mexico, Nicaragua, Panama, Paraguay, Peru and Uruguay. The history of those countries was also dominated by European settlers who seized land and limited opportunities for indigenous people. A third outlier is the U.S., which has much more inequality than other countries at the same level of income.

The pattern shown in Fig. 7.9 reveals that, for countries other than the settler societies of Southern Africa, Latin America and the U.S., Gini coefficients slope downward from values between 30 and 50 among the poorest countries to values between 20 and 40 among richer countries. Kuznets was right to be skeptical: there turned out to be no inverted-U in the modern era, just a wide range of variation around the world and over time and a modest tendency for higher-income countries to have less inequality. The cross-sectional pattern has an apparent inverted-U only because the Southern African and Latin American countries that were conquered and settled by colonialists are in the middle-income range today.

7.1.2.6 Inequity and Disparities by Gender, Ethnicity, Nationality and Race

So far, we have seen how data from individuals and households can be added up and compared over time and among countries and regions, including the example of disparities between racial and ethnic groups in the U.S. shown in Fig. 7.4. Inequity between demographic groups, sometimes called horizontal inequality, played a major role in agricultural history and remains a central concern in modern agriculture and food systems.

In Southern Africa and the Americas, the colonial conquest and slavery that gave rise to the inequality shown in Fig. 7.9 were often practiced explicitly for the purpose of controlling agricultural land and labor, preventing enslaved and colonized people from working the land for themselves on self-employed family farms. Control of agriculture took different forms in other regions, for example through concentration of land ownership by inheritance so that others had no choice but to work as tenant farmers, giving up a large share of each year’s harvest to landlords. Those systems were sometimes overthrown in violent revolutions, with land reforms and other efforts to equalize access to land and allow people to work for themselves, but social relations remain marked by ancient agricultural practices all around the world.

The term inequality generally refers to differences among individuals or households, while inequity and disparities generally refer to differences between groups that are unjust and undesirable. Historically, a very wide range of criteria have been used to segregate and discriminate around the world, creating barriers to social inclusion that persist in each region. The categories used in the U.S. census shown in Fig. 7.4 illustrate some of the ways that groups are formed. In the U.S., the main categories offered to respondents in the 2020 census were American Indian and Alaska Native (ancestry present before colonial settlement), white (ancestry from all parts of Europe and the Mediterranean or Middle East), Black (everyone of African ancestry, both descendants of enslaved people and also immigrants), Asian (often but not always more recent immigrants from East, Southeast or South Asia) or Hispanic (a designation typically selected by people of Spanish-speaking ancestry from the Americas). All these categories have vague boundaries today and are self-declared by the survey respondent, but they trace their origins to sharp divisions involving violent conquest and legally enforced limits on what people in marginalized groups could do.

The legacy of past and ongoing discrimination between groups is clearly visible in agriculture and nutrition worldwide, as advantages or disadvantages are transmitted and shared leaving some groups with fewer resources of all kinds, while others accumulate high levels of wealth, education, social and political connections as well as physical health. Resources of one kind are commonly used to build other strengths, and deprivation in one dimension has costs in other realms as well. Various kinds of social inclusion or exclusion may overlap, creating new kinds of privilege and injustice at the intersection of multiple social identities.

Boundaries of social groups and barriers to inclusion that people face differ greatly by country and region of the world, and may be based on distinctions of ancestry, religion or other factors that exist only in that place. Racial and ethnic categories are also periodically redefined, for example through the different questions asked in each successive U.S. census. Some countries, like the U.S., have explicit nondiscrimination rules; others, like India, have reservations or quotas in favor of previously excluded groups; and in countries like France, Germany and Rwanda, where information about ancestry was so violently abused to commit genocide in recent memory, asking about ancestry is now illegal or strongly discouraged.

One of the few inequities that can be traced using internationally comparable data over long periods of time is the gender gap in earnings. People usually live together in households and pool resources to some degree, but the autonomy and power of each person within a household depends in part on what they can earn through outside employment, and throughout history almost all societies have been organized to offer higher wage employment for men than for women. Data on that gender gap in earnings are shown in Fig. 7.10.

Fig. 7.10
A multi-line graph plots the gender earnings gap among full-time employees from 1970 to 2022 in Korea, Japan, the United States, and Norway. All the curves follow a downtrend. The gender earnings gap was high in Korea and low in Norway.

Source: Reproduced from OECD, Gender wage gap indicator [https://doi.org/10.1787/7cee77aa-en]. Data shown are male minus female, as a percent of male, using median earnings of all full-time employees. Updated versions of this chart are at https://data.oecd.org/chart/7bUQ

Gender earnings gap among full-time employees in selected countries, 1970–2022

As shown on the vertical axis of Fig. 7.10, the male–female gap in earnings of full-time employees ranges from under 5% to almost 50% of male wages. The gray background lines show trajectories for the 40 countries with available data, in addition to the 4 countries highlighted. Many countries have noisy data with sharp rises and falls that are likely to reflect measurement error, but the four highlighted examples provide a clear indication of how countries differ in the level and trends of the gender wage gap. All four countries highlighted in Fig. 7.10 have greatly narrowed the gap, with notable differences in the speed at which opportunities for men and women have converged. Summary statistics like those in Fig. 7.10 do not tell us anything about the causal mechanisms behind social change, but observing these patterns demonstrates that societies differ in many important ways, and that large disparities such as the gender gap in wages can be reduced over time.
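
The indicator plotted in Fig. 7.10 is a simple ratio of median earnings. The short Python sketch below shows the calculation with made-up earnings figures; the numbers are illustrative only, not OECD data.

```python
def gender_wage_gap(male_median: float, female_median: float) -> float:
    """Male minus female median earnings, as a percent of male median earnings."""
    return 100 * (male_median - female_median) / male_median


# Illustrative annual earnings for full-time employees (not real data)
print(round(gender_wage_gap(52_000, 42_000), 1))  # 19.2 percent
```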

7.1.3 Conclusion

This section describes the economic toolkit used to measure poverty, inequality and inequity, starting the second half of the book with example data from the U.S. and worldwide. In each case, we focus on data visualization, using line graphs or bar charts and scatterplots to put all available observations of each variable on one figure. The charts of data presented in this section summarize what millions of survey respondents had to say about their lives, counting each one equally to provide an overall picture of trends over time and differences between countries, regions or groups.

Some of the charts shown in this section are images reproduced directly from the source, while others are original data visualizations created for this book to show standard data in a new way. In all cases, readers can go to the source mentioned in the note below each figure to learn more about how each variable was constructed and obtain updated information if available. Data about variation within individual countries like the U.S. are usually best obtained directly from their national statistical agencies such as the Census Bureau, while cross-country comparisons often come from international organizations that work with data from their member countries, such as the World Bank for poverty measurement and the OECD for monitoring gender gaps.

The charts made or chosen for this book aim to provide the broadest, most meaningful and accurate possible picture of the concept to be illustrated. Online access to data is now such that observers can understand the world by combining all the available data to see a bigger picture than was previously possible. In the past people had to zoom in, choosing specific examples in hopes that those would represent a larger truth. Now we can zoom out, showing differences between whole countries or continents over time, using all available data to limit the problem of selection bias in what we would otherwise be able to see from our own individual perspective.

Measuring and comparing levels of poverty, inequality and inequity is challenging but not impossible. Great progress has been made thanks to innovators who developed new and better measurement tools, and then the vast number of data collectors, respondents and analysts who provided the information that was then transformed into the final data we see. Compiling these data accurately is difficult and expensive. The information is in the public domain, and the agencies responsible for data collection and analysis are not always sufficiently well supported, but the sources shared in this section provide a remarkable picture of the partially completed task of eliminating poverty, inequality and inequity in the U.S. and around the world.

7.2 Vulnerability, Resilience and Safety Nets in the Food System

7.2.1 Motivation and Guiding Questions

The previous section focused on differences among people, and the resulting inequality and inequity in the food system and the economy as a whole. For any one person or household, how can we understand variation over time? How do we all protect ourselves against random events like illness or the weather?

Vulnerability to risk plays a major role in agriculture and food systems, worsening poverty and malnutrition. For any one event it is usually impossible to distinguish luck from other factors, but farmers and others can learn from experience how to protect themselves from adverse events. Can interventions help people manage risks and thereby improve outcomes?

Farmers and other people protect themselves to some degree by diversifying activities among different risks and by holding stocks of food to protect against shortfalls. With certain kinds of risk, people might be able to pay in advance for private insurance or obtain help through informal social insurance among members of an extended family and other mutual aid groups. Many people are also helped by public-sector insurance and safety nets, commonly known as social assistance. Most importantly, people can sometimes save and invest in productive activities that provide increasing wealth over time, which they use to avoid or protect themselves from every kind of risk. This section explores how each path can help people escape from poverty and deprivation described in the previous section of this chapter.

Food economics focuses on risk management in part because each person needs roughly constant amounts of food every day, whereas agricultural production is seasonal and fluctuates randomly. Farm households must manage production risks and meet their own consumption needs, while many nonfarm enterprises engage in food storage and transport to bridge times and places of relative scarcity and abundance. Nonfarm consumers also face food insecurity, usually because of variation in their individual earnings or nonfood expenses, but also when their entire community faces food scarcity and price spikes. Managing every one of these risks involves some combination of individual resilience, insurance of various kinds and ultimately some kind of social assistance.

By the end of this section, you will be able to:

  1. 1.

    Define and compare risk and uncertainty, risk aversion, vulnerability and resilience;

  2. 2.

    Describe how diversification, savings and insurance are used to protect against risk;

  3. 3.

    Explain why insurance is available for some risks but not others, using the concept of asymmetric information and possibility of adverse selection or moral hazard; and

  4. 4.

    Describe and summarize results of how food insecurity and other aspects of vulnerability and resilience are measured in the U.S. and around the world.

7.2.2 Analytical Tools

This section addresses the role of uncertainty and risk for farmers and food consumers. These terms are sometimes used interchangeably, but they can also be given more precise meaning. Most often uncertainty refers to lack of knowledge in general, whereas risk refers to situations where people have learned something about the probabilities and magnitudes of each possible outcome, such as a short-term weather forecast where people know the risk of rain later that same day.

In situations of extreme uncertainty, people have no evidence at all about probabilities or magnitudes, so people’s choices are purely a matter of faith. Economic analysis of risk begins when we have some evidence about the likelihood of dangers and opportunities ahead. The probabilities and magnitudes of each outcome are always uncertain and likely to change over time, but people can learn from experience and make choices based on the possible outcomes they anticipate. Analysis of risk often focuses on the probability and severity of possible harms, balanced by interest in the probabilities and potential gains from favorable events.

The degree to which a given adverse event causes harm is a person’s vulnerability to that danger, and the opposite of vulnerability is resilience. Like other terms, vulnerability and resilience are sometimes defined narrowly. Vulnerability can be used to mean that the risk itself is higher, for example that droughts or floods become more frequent, and resilience can be used to mean only recovery after outcomes have worsened, for example replanting a field after it was destroyed.

One aspect of poverty is high vulnerability to risk, and those vulnerabilities may be so extreme as to create poverty traps that push people back into poverty even after favorable events have occurred. More generally, even at higher-income levels most people are risk averse to some degree, meaning that people prefer greater certainty around whatever average outcome they may face. All these concepts play an important role in agriculture and food systems as described in this section.

7.2.2.1 Example Time Paths of Wellbeing for Low-Income Farm Families

To visualize the role of all kinds of risk in relation to poverty status, it is helpful to use a chart of possible trajectories drawn in Fig. 7.11.

Fig. 7.11
Three graphs exhibit hypothetical trajectories of farm household living standards over several years. The poverty line is set at 100 so that percent differences are visible; the panels indicate poverty traps, vulnerability and resilience to risk, variation in the depth and duration of poverty spells, and growth.

Hypothetical trajectories in and out of poverty over time

Trajectories over time are rarely observed with sufficient frequency to see month-to-month changes in total income, consumption or expenditure, so the examples in Fig. 7.11 are purely hypothetical for the purpose of visualizing the basic terminology of risk. The scenarios shown tell the story of seasonal fluctuations experienced by very low-income farmers with a single harvest each year, but the same concepts would apply to other people facing other kinds of risk.

On the vertical axis of the three panels we have an index of consumption or wellbeing that starts at 100 in June of some year. Each panel then traces a sequence of harvest seasons that occur in the last few months of each calendar year, followed by a ‘lean season’ when stocks from the previous harvest are running out and wellbeing is typically lowest. The stylized scenarios in Fig. 7.11 are examples to illustrate some of the terminology needed to discuss risk management. In each case there is a contrast between two trajectories, with the lighter shaded line having a more advantageous outcome, and in each panel the second harvest is better than the first or third harvests.

The top panel of Fig. 7.11 shows a situation of chronic poverty, where seasonal fluctuations affect only the depth and duration of lean seasons before each harvest. The person’s wellbeing is traced by the dark solid line, and even a successful harvest in the second year lifts the person out of poverty only temporarily along the light-colored line. Persistent poverty can take many forms, but a common situation might be that the person has no secure and rewarding way to save and invest the proceeds from a good harvest. Economists use the term poverty trap to investigate possible causes of persistent poverty and ways to escape it, such as offering sufficiently rewarding opportunities for successful harvests to generate long-term gains over time.

The lower left scenario in Fig. 7.11 contrasts a situation of high vulnerability to recurring poverty in the dark dashed line with resilience in the lighter dashed line. Here the term resilience is used in a very general sense to mean protection against falling into poverty, without necessarily any gains in good years or improvement over time. In the lower left panel, resilience in the lighter dashed line takes the form of avoiding a decline into poverty during the lean seasons, for example thanks to improved crop storage.

The lower right panel of Fig. 7.11 shows the possibility of a growth trajectory in which the proceeds of each harvest are reinvested to improve outcomes over time and lift the person permanently out of poverty. In this case the lighter dotted line shows a more stable trajectory within each year, which would be desirable, but both paths have a similar growth rate from year to year and similar endpoints in the long run. All high-income communities emerged from poverty this way at some point in their history, and some can have sustained exponential growth for many decades. For farm families and agricultural communities that become wealthy, part of the story is increasing productivity per acre or hectare of land, but since total land area in each place is limited the growth path requires having many farmers switch into nonfarm work so that the remaining farmers can take over their land.

7.2.2.2 Risk Management Strategies: Diversification, Precautionary Savings and Insurance

A first strategy to manage risk is diversification, for example by farmers who plant a variety of different seed types in different ways, and have livestock and nonfarm activities. Diversification is a form of self-insurance, by which people avoid betting too much on any one proposition, even if people know it would have higher payoffs on average in the long run. For example, a farmer may know that keeping cows for dairy is more profitable on average than anything else they could do with their land and labor, but an episode of illness or other problems would be devastating, so farmers usually cannot start dairying unless they have enough wealth or other income sources to offset the risk of a bad outcome.
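
A minimal simulation can make this self-insurance logic concrete. The Python sketch below uses purely hypothetical payoffs and independent good and bad years for two enterprises, showing that an even split keeps the same average income while reducing its variability; it is an illustration of the principle, not a model of any particular farm.

```python
import random

random.seed(1)


def summarize(values):
    """Return the mean and standard deviation of a list of incomes."""
    m = sum(values) / len(values)
    var = sum((v - m) ** 2 for v in values) / len(values)
    return m, var ** 0.5


years = 10_000
single, split = [], []
for _ in range(years):
    # Each enterprise earns 150 in a good year and 50 in a bad year, 50-50 odds,
    # and the two enterprises' outcomes are independent (hypothetical numbers)
    a = 150 if random.random() < 0.5 else 50
    b = 150 if random.random() < 0.5 else 50
    single.append(a)                 # all resources in enterprise A
    split.append(0.5 * a + 0.5 * b)  # resources split evenly between A and B

print("specialized: mean %.1f, std dev %.1f" % summarize(single))
print("diversified: mean %.1f, std dev %.1f" % summarize(split))
# Both strategies average about 100, but the diversified income varies much less.
```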

When diversification takes a producer away from their higher-growth options, putting resources into additional activities to spread risk provides resilience at the cost of lower growth. In some cases, however, diversification also supports growth through complementarity among activities. For example, farmers growing a cereal grain that uses nitrogen often rotate or combine it with a nitrogen-fixing legume like cowpeas or soybeans, because the agronomy of soil nitrogen favors rotation or intercropping of both crops on the same fields. Crop-livestock integration can be another source of complementarity, using crop residues as feed and returning the manure to fields.

Diversification to limit risks and complementarity to increase total output are both helpful only to a limited degree. Most farmers choose to focus on just a few different crops or animal products, perhaps two to five different species, although some farms that serve consumers directly or grow food for themselves might produce a dozen or more different kinds of vegetables and other crops, and keep different kinds of animals. For livestock and crop enterprises with scale economies, increasing returns can lead farmers to focus on just one species as in specialized dairy or cattle operations and sugar or tea plantations, but those returns may be more variable, making specialization affordable only to farmers with relatively high wealth or other ability to absorb risk.

At each level of diversification or specialization, an important strategy to manage risk is precautionary savings or storage, simply to hold over some output from good times into bad. In very low-income settings, there may be few ways to store grain or save money securely, so improvements in storage and savings can be very helpful to limit downside risk even if they do not result in long-term growth. If productive investment opportunities are available, however, then even a person’s seasonal or precautionary savings can be used to fuel growth.

For some kinds of risk people can acquire insurance, paying in advance to fund a pool of resources from which each person is paid when a bad outcome has occurred. Informal kinds of social insurance are an important feature of all societies, as people in extended families and other groups provide mutual aid to each other in times of need. In those settings, even the lowest-income members of the group often share some of what they have, and those who are more fortunate are expected to provide for others.

Social insurance can be formalized to some degree, for example in rotating schemes among neighbors or friends where each member agrees to contribute something each week or month. That creates a pool from which one member draws, either in times of need or on a regular basis. When withdrawals are on a fixed schedule, for example a group of twelve people who contribute $10 monthly until their designated month when they receive $110, the pool serves as a rotating savings and loan society. When withdrawals are based on need, for example burial societies to which people pay each month and receive help for funerals, the pool serves as both savings and insurance.
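
A sketch of the bookkeeping behind that rotating-pool example (twelve members, $10 per month, $110 received in one’s designated month) is shown below; the member names and loop structure are purely illustrative.

```python
members = [f"member_{i}" for i in range(1, 13)]   # twelve members, one payout month each
contribution = 10                                  # dollars per member per month

paid = {m: 0 for m in members}
received = {m: 0 for m in members}

for recipient in members:                          # each member takes the pool once
    # The 11 other members contribute this month; the recipient collects the pool
    pool = contribution * (len(members) - 1)
    for m in members:
        if m != recipient:
            paid[m] += contribution
    received[recipient] += pool

print(received["member_1"], paid["member_1"])      # 110 110: savings with a timing benefit
```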

Formal insurance schemes can operate as nonprofit social enterprises or as for-profit businesses, sometimes organized as a ‘mutual company’ owned by its customers. All insurance providers ask people to pay a lump sum in advance or a regular premium in exchange for a given level of coverage. Insurance of that type can be provided for only certain kinds of risk. That people can buy insurance for some risks but not others is something we all may take for granted, simply assuming that some risks are insurable while others are not, but insurance provision differs across countries and can change rapidly when new technology or other innovations alter the kind of risk that can be insured.

For some risks, formal insurance is optional and people can choose to buy it, such as insuring against breakage or theft of property. In agriculture, the oldest and most universal example is insuring a field of crops against damage from hail, which was among the earliest formal insurance plans introduced in Europe in the late eighteenth and nineteenth centuries. For other risks, insurance is provided by private enterprises but required by law, such as automobile insurance in most countries, and a few kinds of risk are typically insured directly by governments, such as unemployment insurance. All three kinds of insurance are commonly observed to help pay for health care services. Some health insurance is provided directly by governments, some is provided by private enterprises to everyone under a government mandate, and some is provided privately if people choose to pay for it. The role of government mandates and public insurance alongside private insurers is a crucial aspect of risk management in agriculture and other domains.

7.2.2.3 Market Failures in Insurance: Adverse Selection and Moral Hazard

Economists explain the market for insurance as a problem of limited information about the risks faced by each person. If the insurance provider could easily assess the probabilities of each outcome, they could calculate the expected value of payouts over time. Expected value is the sum, across all possible outcomes, of each outcome’s probability multiplied by its value. For example, in nineteenth-century France if a field’s risk of being destroyed by a hailstorm each year were one in a thousand, and the payout to cover the crop’s value were ten thousand francs, then an annual premium of ten francs would exactly cover the expected value of that risk. An insurer with a thousand such customers would pay out once each year on average and exactly break even in a normal year. To be more confident of breaking even each year they would need a larger number of customers, and to cover a few bad years in a row they would need financial reserves. Such an insurance plan would be actuarially fair, meaning that a nonprofit or mutual insurance company could arise and persist indefinitely, and a for-profit insurance provider might be able to charge even higher premiums but still find customers whose risk aversion makes them willing to pay more than the expected value of the risk they face.
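
The break-even arithmetic in this example can be written out directly. The sketch below reproduces the text’s illustrative hail numbers; `fair_premium` is simply a name chosen here, not a standard library function.

```python
def fair_premium(prob_loss: float, payout: float) -> float:
    """Actuarially fair premium: the expected value of the payout per customer."""
    return prob_loss * payout


prob_hail = 1 / 1000      # chance a field is destroyed by hail in a given year
payout = 10_000           # francs paid if the crop is destroyed
customers = 1000

premium = fair_premium(prob_hail, payout)
collected = customers * premium                    # premiums collected per year
expected_claims = customers * prob_hail * payout   # expected payouts per year

print(premium, collected, expected_claims)         # 10.0 10000.0 10000.0 -> break even
```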

The fundamental market failure that causes insurance to be provided for some risks but not others is asymmetric information between each customer and the insurer. One kind of information asymmetry is hidden attributes affecting risk that only the customer knows, for example if a farmer knew that certain fields were more vulnerable to hailstorms than other fields. Another aspect is hidden actions by the customer, for example if a farmer who had bought insurance then chose to plant riskier crops in ways that the insurer cannot observe. Insuring a standing crop against damage from hailstorms emerged early and persists everywhere in part because there is almost no asymmetric information about that kind of risk. There is little that farmers can know or do that would alter the odds of being hit by a hailstorm, which then destroys all standing crops in the place where it hits. An insurer who has observed hail damage for many years can guess the odds and issue the plan, receive premiums, verify claims by visiting each field after a storm to see that the crop was in fact destroyed, pay compensation and continue to operate for many years. If the insurer is a relatively small company serving a limited area, covariance among their customers’ risks creates the possibility of many claims in a single year. Each local insurer’s risks can then be pooled in a market for ‘reinsurance’ whereby they are compensated by a larger, different insurance company in the event of extreme losses. The market for reinsurance is also limited by asymmetric information, and works only when the reinsurer is confident that the local insurer does not know more about those risks than the reinsurer does, and will not take on more risk after being reinsured.

Insurance plans for farm risks beyond hailstorms are often offered and can succeed to the degree that they overcome the market failures caused by asymmetric information. When there is hidden information about their risks that customers know but insurers cannot see, the cause of market failure is adverse selection. As that term implies, the problem is that customers with higher risks will be able to self-select into buying insurance. When there are hidden actions that customers might take that increase risk once they have insurance, the cause of market failure is known as moral hazard. That term arose in the nineteenth century when insurance providers argued that riskier behavior was immoral. The language used to explain how asymmetric information causes market failure is similarly colorful, as both adverse selection and moral hazard routinely cause insurance markets to ‘unravel’ in a ‘death spiral’ towards bankruptcy unless governments intervene.

Information asymmetries that cause the unraveling of insurance markets can be illustrated by the many attempts to create agricultural insurance for fire damage, crop yields or livestock survival. The oldest of these is fire insurance. Returning to our example of nineteenth-century France, if an insurer’s survey of past fires shows that one in a thousand farm buildings burn down every year, each causing more than ten thousand francs of damage, to cover their costs they might need to charge an annual premium of eleven francs for a payout of ten thousand francs in the event of a fire. Farmers who know they have lower than average fire risks would not sign up so only higher risk customers enroll, which is adverse selection. Furthermore, those farmers who do enroll might take less care to avoid fire, which is moral hazard.

The effects of adverse selection in enrollment, and of moral hazard among those who have enrolled, are a predictable unravelling of the market over time. After launching what appears to be an actuarially sound insurance plan, adverse selection leads only those with high fire risk to sign up, and moral hazard might lead some of them to incur even higher risks because they have insurance. The result is a higher probability of fires among the insured population, for example on average two in a thousand buildings might burn, so the plan is no longer actuarially sound. The insurer loses money on average but might stay in business and raise their premium above twenty francs. That does not solve the underlying problem, however, because now only those whose risks are higher than two in a thousand would sign up, and once insured they might do riskier things, so the insurer might then find that three in a thousand insured buildings are burning. Raising their premium again to above thirty francs would just worsen the problem.
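
A stylized simulation of that unravelling is sketched below, using hypothetical fire risks and assuming (for simplicity) risk-neutral customers who enroll only when their own expected loss exceeds the premium; real buyers’ risk aversion and moral hazard would change the numbers but not the direction of the spiral.

```python
import random

random.seed(0)
payout = 10_000                                       # francs paid after a fire

# Hypothetical population: fire risks range from 0.5 to 3 per thousand per year
risks = [random.uniform(0.0005, 0.003) for _ in range(10_000)]

premium = sum(risks) / len(risks) * payout            # fair for the whole population
for round_number in range(1, 6):
    # Adverse selection: only people whose expected loss exceeds the premium stay in
    enrolled = [r for r in risks if r * payout > premium]
    if not enrolled:
        print(f"round {round_number}: premium {premium:.0f} francs, nobody enrolls")
        break
    avg_claim = sum(enrolled) / len(enrolled) * payout
    print(f"round {round_number}: premium {premium:.0f} francs, "
          f"{len(enrolled)} enrolled, average claim cost {avg_claim:.0f}")
    premium = avg_claim                               # premium raised to cover losses
```

Each round the premium rises and enrollment shrinks toward only the highest-risk customers, which is the ‘death spiral’ described above.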

Experienced insurers anticipate the problem and avoid introducing plans that face asymmetric information, but it is not always possible to predict whether adverse selection or moral hazard will occur. It may also be possible to fix the information asymmetry. In the case of fire insurance, the losses are so devastating that people have a very strong incentive to make insurance work. Early fire insurers in nineteenth-century Europe discovered that they could make plans sustainable by employing fire inspectors to verify that customers have precautions in place before the plan is issued and by employing fire investigators who authorize payouts only if they can determine that the cause was not negligence or another moral hazard. Private enforcement of these rules by insurance companies is only partially effective, so in the twentieth century governments increasingly intervened with building inspectors who enforce fire safety codes and fire investigators who determine the cause of every fire. Those public services, along with firefighters who limit the damage when fires occur, then allow more diverse private companies to compete and offer lower-cost fire insurance to everyone in the areas covered by the government’s fire prevention programs.

Crop and livestock risks are extremely important for farmers, but insurance providers have rarely been able to overcome asymmetric information enough to make plans sustainable. Instead, the importance of those risks for farmers has sometimes led governments to intervene by introducing subsidized plans, expecting to cover only some of the plan’s losses. If the underlying market failure is not addressed, however, then the death spiral runs in reverse, with government payouts growing over time as increasingly high-risk, low-return activities are enrolled in the plan. For example, from the 1930s until the 1980s, the U.S. government offered only very limited crop yield insurance for which only some U.S. acres were eligible and enrolled. In 1994 and then in 1996, new policies authorized payment to support insurance plans covering a wider range of losses, including not just yield but also total revenue. Farmers responded by enrolling a larger fraction of riskier acreage. Each successive round of policy change has allowed payouts to grow, feeding an upward spiral towards enrollment of almost all eligible acres and the government absorbing a larger fraction of program payouts. The program still uses the terminology of insurance, but payouts became so frequent and predictable that farms came to rely on these plans for regular revenue, not just in exceptional years.

Technological innovations can sometimes overcome asymmetric information and solve the underlying market failure, allowing new insurance markets to emerge. For example, remote sensing of weather conditions has led to many experiments with ‘parametric’ insurance, where payouts are triggered by an index of specific conditions such as prolonged drought. Payouts can even be triggered by forecasts, leading to ‘anticipatory’ payments to farmers that might help them escape the harms caused by extreme weather. Whether this kind of payment can be sustained depends in part on whether the plan is actuarially sound from the start, meaning that its expected payouts are covered by its revenues, and in part on whether the plan avoids both adverse selection and moral hazard over time.
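
The logic of an index-based trigger can be sketched as follows, with purely hypothetical contract terms: the payout depends only on an observed weather index, not on any claim the farmer files, which removes most scope for adverse selection and moral hazard.

```python
def parametric_payout(rainfall_mm: float,
                      trigger_mm: float = 300.0,
                      exit_mm: float = 100.0,
                      max_payout: float = 500.0) -> float:
    """Payout scales linearly as seasonal rainfall falls below the trigger,
    reaching the maximum at the exit level (hypothetical contract terms)."""
    if rainfall_mm >= trigger_mm:
        return 0.0
    if rainfall_mm <= exit_mm:
        return max_payout
    shortfall = (trigger_mm - rainfall_mm) / (trigger_mm - exit_mm)
    return shortfall * max_payout


print(parametric_payout(350))  # 0.0   -> adequate rain, no payout
print(parametric_payout(200))  # 250.0 -> partial drought, partial payout
print(parametric_payout(80))   # 500.0 -> severe drought, full payout
```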

Government intervention can help solve insurance market failures in several ways. One approach is to address the adverse selection and moral hazard problem directly, as in the example of fire codes, fire inspectors, fire investigators and fire fighters, all of whom work together to limit fire risk and make it insurable for everyone. Another approach is an insurance mandate, overcoming adverse selection by ensuring that people at all risk levels pay for coverage. The mandate can be universal, as in automobile accident insurance, or based on any criterion other than the person’s health risks, such as all employees of a company as in the U.S. system of employer mandates. In each case, insurance mandates are usually accompanied by efforts to reduce the risk itself and limit risky behavior through policing, for example regarding auto safety and traffic laws, which can itself improve lives and makes the remaining risk insurable at lower cost to consumers.

Extending insurance to a wider range of dangers can reduce the role of randomness in life but ultimately covers only a fraction of the risk that people face, making risk management a central problem for all enterprises and every household.

7.2.2.4 Risk Aversion and Risk-Reward Choices in Production

People differ in their attitudes to risk, based in part on their beliefs about probabilities and impacts, but also on their wealth and resilience. One way of picturing a person’s attitude to risk is by imagining a utility function, capturing the usefulness of income and consumption expenditure to reach higher levels of subjective wellbeing. The utility of income includes purchase of goods and services such as housing, food and so forth that help a person achieve all of their goals including health and longevity, education and knowledge, care for one’s family and gifts to others. An example utility function is shown in Fig. 7.12.

Fig. 7.12
A graph depicts utility U derived from income versus income Y, showing how the expected utility of a risk differs from its expected value. It plots a concave curve and a straight line, both trending upward. The points (Income if lose, Utility if lose), (CE of U, EV of Utility = 0.5 U lose + 0.5 U win), and (Income if win, Utility if win) are indicated on the curve.

Risk aversion reflects higher priority needs at lower levels of income

The solid curve in Fig. 7.12 shows a utility function whose bowed-up shape captures the risk aversion that people often (but not always) reveal in their choices. As income increases from left to right the curve is steeper at first, indicating how increments of income are spent on higher priority needs, and the curve eventually becomes flatter indicating diminishing marginal utility of income. Additional income remains useful as shown by the positive slope throughout the range.

In the situation shown, a person is considering a 50–50 gamble whose payoffs are shown by the two dots. The dashed line connects those two dots and locates the hollow square and hollow circle used to illustrate the effect of risk aversion on behavior.

The horizontal axis shows income from the gamble if they lose or win; with 50–50 odds the expected value is half-way between those two levels of income. For example, a coin toss for $100 or nothing has an expected value of $50: if a person took the gamble many times, on average they would earn $50 on top of the income they would have if they lost. That expected value is denoted EV(Income) along the horizontal axis.

The vertical axis shows the usefulness of income for wellbeing, which is bowed up to show how a person would meet their highest priority needs with their first increments of income, and each additional unit of income after that would be spent on things with diminishing marginal utility for their wellbeing. The curve shows this person’s level of wellbeing at each level of income if they lose or if they win. With 50–50 odds, the expected value of utility is half-way between the two levels along the vertical axis as shown by the dashed line, denoted EV(Utility).

The hollow circle along the utility curve shows both the subjective usefulness of the gamble for this person along the vertical axis and the amount of income that would be equally useful to them if obtained with certainty. This value along the horizontal axis is known as the certainty equivalent (CE) value of the gamble. If the person could be guaranteed that certainty-equivalent value, it would give them the same utility as the expected utility of the gamble. In monetary terms the CE of utility from the gamble, denoted CE(U), is lower than the expected income, EV(Income), because this person has higher priority needs for small increments of income than for further increments at higher-income levels.
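
These definitions can be checked numerically. The sketch below assumes a square-root utility function, one simple concave form that is not necessarily the curve drawn in Fig. 7.12, and a hypothetical 50–50 gamble between incomes of 100 and 900.

```python
from math import sqrt


def utility(income: float) -> float:
    """A concave utility of income; square root is one illustrative choice."""
    return sqrt(income)


income_lose, income_win = 100.0, 900.0                   # hypothetical gamble payoffs
ev_income = 0.5 * income_lose + 0.5 * income_win         # EV(Income) = 500
ev_utility = 0.5 * utility(income_lose) + 0.5 * utility(income_win)  # EV(Utility) = 20

# Certainty equivalent: the sure income whose utility equals the gamble's expected utility
certainty_equivalent = ev_utility ** 2                   # inverse of the square-root function

print(ev_income, ev_utility, certainty_equivalent)       # 500.0 20.0 400.0
# CE (400) is below EV(Income) (500): the sure amount this person would accept in
# place of the gamble is smaller than its average payoff, which is risk aversion.
```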

Risk aversion has enormous practical importance for agriculture. For example, if the gamble in Fig. 7.12 were adoption of a risky new farm technology, the farmer’s utility from it would be lower than the expected value of the payoff. Farmers in this situation might miss out on a growth opportunity due to the consequences of experiencing a bad year. Many people routinely make choices that reveal risk aversion of this type, showing a preference for greater certainty even if the average payoff is lower, whenever they have high priority needs for additional income over the relevant range.

When many people in a society show risk aversion towards certain kinds of activity such as new technology adoption, we observe a risk-reward tradeoff where higher risk activities offer higher payoffs on average. But people do not always show risk aversion, especially for risks that are harder to assess and in situations where short-term emotions rather than long-term wellbeing drive decision-making. For example, it is very difficult to assess risks when comparing outcomes with very small probabilities, such as the odds of winning a lottery. It is also difficult to assess risks when emotions cloud judgment, as in sports and other competitions. In those situations, people routinely show risk-loving behavior, where their certainty equivalent willingness to pay for a lottery ticket or a bet on sporting events exceeds the expected returns from that gamble. In those situations, people are taking on risk that also leaves them with even lower income on average.

People who assess risks accurately and can afford to make more risk-neutral decisions will have higher incomes in the long run. In situations illustrated in Fig. 7.12, risk aversion can be driven by high priority needs for small increments of income. Interventions can help people take advantage of high return but potentially risky opportunities not only by helping them assess those risks accurately, but also by ensuring that basic needs are met so they can focus on average outcomes over the longer term.

Low-income people with high priority needs, like others throughout the income range, do not actually show consistent levels of risk aversion across distinct kinds of gambles. For example, many people show risk-loving behavior by buying lottery tickets on the same day that they show risk-averse behavior towards other kinds of risk. That kind of inconsistency could be due to the genuine usefulness of dreaming about winning the lottery but could also be caused by misjudging the odds of winning. Another kind of inconsistency arises when people show extreme risk aversion in some decisions, for example buying insurance whose cost of premiums far exceeds the expected value of payouts, such as extended warranties for small kitchen appliances.

Inconsistent attitudes to risk, whether extreme risk-aversion to some dangers or risk-loving willingness to gamble on some opportunities, leave people with lower total income and wealth over time. If people were better informed and felt more secure, they might regret those choices. These observations help explain the outcomes we see, including government regulations about gambling and insurance, because both kinds of products allow sellers to create a false impression about the likelihood and value of winning (in the case of gambling) and a false impression about the likelihood and value of payouts (in the case of low-value insurance).

The general case illustrated in Fig. 7.12 underlies the typical situation in which farmers and other producers show risk aversion, thereby missing opportunities for high-return activities. For people in or near poverty, where a bad outcome could lead to destitution from which they might never recover, risk aversion is a necessary and unavoidable consequence of their low incomes. If people in that situation tried the high-return activity, some might be lucky but on average the investment would lead to regret. There are many other situations in which producers are well advised to show a high degree of risk aversion, for example when bad outcomes would lead to bankruptcy and a permanent loss of the family farm or other enterprise.

In agriculture and other activities, the widespread need for risk aversion to protect lives and livelihoods often creates a risk-reward tradeoff, leaving higher return activities available for people with less risk aversion. Accurately perceiving each set of probabilities and payoffs is difficult, and some high-risk activities actually offer low rewards on average. People reach their highest available level of income in the long run when they perceive risks accurately and can afford to act in a risk-neutral manner. Escaping from poverty therefore requires not only having high return activities available, but also having sufficiently accurate information and sources of resilience such as social insurance and safety nets for low-income people to adopt those innovations. Interventions that provide high-return options, help people assess those options and ensure enough resilience for them to act in a risk-neutral manner have helped many millions of people move onto the growth trajectory illustrated at the start of this section in Fig. 7.11.

7.2.2.5 Consumer Prices and Food Crises

Risk and risk management are important not only for farmers but also for consumers. Both groups face risk in their own production and income, and risk in market prices for what they buy and sell. The prices received by producers and those paid by consumers differ widely because of value added after harvest, which includes all kinds of food processing and packaging, handling, distribution and retailing at the point of sale. An important part of those value-added services is storage and transport designed to smooth availability over space and time.

Food available to consumers at each location is more diverse, and more stable in price, than what is produced at that location. Those marketing services, which include the food manufacturing industry that transforms agricultural products into packaged and processed items, account for about 85% of the cost paid by consumers for food purchased at grocery stores in the U.S. About 15% of consumer spending is the farmgate cost of raw products purchased from farmers, about 30% is the cost of food processing and packaging, and the remaining 55% is the cost of distribution and retailing.

The data on cost shares for retail products in the U.S. come from the USDA’s ‘food dollar’ calculations, which are based on the physical flow of goods and recorded transactions discussed in Chapter 9 and reported in Fig. 9.4. The FAO provides similar estimates for a few other countries. The fraction of consumer food spending that goes to farmers is somewhat larger in lower-income countries, and larger for some types of food, but even for raw products in almost all places the demand and supply of distribution and retailing to consumers leads to more spending on marketing services than for production on farms, or for transportation and storage from farms to consumers. In the U.S. and other high-income countries, retail food prices are mostly driven by the cost of processing, packaging, branding and retailing, including advertising, which is estimated by the USDA to account for 2.6% of grocery costs.

In the U.S. and many other countries, processing and retailing services drive retail prices and determine the composition and healthiness of each item sold. Transportation and storage play a different role in the food system, allowing each community’s food consumption to be more diverse and stable than its food production. Transport and storage to smooth and diversify consumption turn out to be a low fraction of all food costs in the U.S. and other high-income countries but can be expensive in low-income settings.

The cost of transportation to consumers depends primarily on infrastructure and volumes shipped, and can be extremely low when products are moved on large vehicles. Once products are loaded onto a truck, train or boat, the energy, equipment, personnel and other resources needed per unit-mile are much lower for larger, slower vehicles such as trains and ships, which carry many times their own weight and move with less friction and fewer stops and starts than smaller vehicles.

For storage, the cost of stockholding to smooth prices over time is influenced by infrastructure, but also by the urgency with which people need money for other things. Once items are loaded inside a warehouse, the cost of stockholding involves some use of energy, equipment and personnel but varies mostly with the opportunity cost of keeping the products instead of selling them and using the funds for other things.

Low-income countries and places with poor infrastructure for transport and storage have greater consumer price variation, but the basic pattern of price dynamics is broadly similar to that observed in the U.S., as shown in Fig. 7.13.

Fig. 7.13
A multi-line graph plots the index value of January 1990 = 100 from 1990 to August 2023 for restaurant prices of food away from home, grocery prices of food at home, producer prices for processed foods, and producer prices for unprocessed foods. All the curves trend in a decreasing and increasing pattern.

Source: Reproduced from Federal Reserve Economic Data [FRED], using price indexes from the U.S. Bureau of Labor Statistics as the average for each category relative to the overall U.S. consumer price index for all goods and services. Updated versions at https://fred.stlouisfed.org/graph/?g=12MMl

U.S. price indexes for consumer and producer prices, January 1990–August 2023

The data shown in Fig. 7.13 are indexes set to 100 for the month of January 1990, to observe percentage changes since then for each category of food prices. Each index is a weighted average of representative items sold in each category, where item weights are proportional to sales. For example, if wheat accounts for 5% of all unprocessed food sold by farmers, its price changes would account for 5% of changes in the index. All four food price indexes are shown relative to the consumer price index for all goods and services, so the lines track the real value of each type of food in terms of all other things sold in the U.S.
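
The weighting described here amounts to an expenditure-weighted average of price relatives. The sketch below uses made-up weights and price changes, not actual BLS data, simply to show the mechanics of the wheat example.

```python
def weighted_index(weights: dict, price_relatives: dict) -> float:
    """Weighted average of price relatives (current/base price), scaled so the base = 100."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return 100 * sum(weights[item] * price_relatives[item] for item in weights)


# Hypothetical 'unprocessed foods' basket with wheat at a 5% sales weight
weights = {"wheat": 0.05, "corn": 0.15, "cattle": 0.30, "other": 0.50}
price_relatives = {"wheat": 1.20, "corn": 1.00, "cattle": 1.00, "other": 1.00}

print(round(weighted_index(weights, price_relatives), 1))
# 101.0: a 20% rise in wheat alone moves the category index by 0.05 * 20 = 1 point
```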

The lowest line shows prices paid to farmers and traders for unprocessed foods. Figure 7.13 shows that the aggregate of all food sold by farmers has brief spikes and long valleys. The peaks of those spikes occurred in August 1996, May 2004, July 2008, April 2014 and April 2022. Prices drop sharply after each peak, often trend downward for several years before hitting bottom, and sometimes stay low for a while before beginning a gradual climb to the next peak. Each individual item would have different price trajectories, but this general pattern reflects how each year’s supply-demand balance affects stockholding for raw materials. In years of declining prices when supply growth exceeds demand increases, storage bins for grain and other crops fill up. People respond with less investment in supply and more demand, so prices rise and stocks are used up.

The peak prices for farm commodities in the lowest line happen when stocks approach zero, just before buyers expect replenishment from the next harvest. Stockholding is not precisely measured, in part because much of it is ‘pipeline stocks’ held temporarily at each stage of the value chain, but the fact that participants in food markets hold a variable level of stocks in anticipation of future harvests plays a central role in food price risks. Food price crises occur when some buyers fear not being able to acquire enough of the materials they need to keep operating, so they are willing to pay very high prices until their pipeline stocks are replenished, and everyone else responds similarly leading to a runup in price to a peak just before the next harvest arrives.

The light-colored central line in Fig. 7.13 shows producer prices for processed foods, as sold by food manufacturers to grocery outlets and food service providers. Their price trajectory is like an attenuated echo of the prices of raw materials, with a lengthy period of declining real prices received by food manufacturers from 1990 to a low in mid-2006, after which prices rose to a peak in late 2014 before falling again to 2019 just before the pandemic. The onset of COVID drove a sudden wedge between prices paid to farmers that plummeted from January through April 2020 and prices paid to food manufacturers that shot up in April and May 2020, before recovery drove both up faster than general inflation to their peak in May 2022.

The dark, heavier line shows consumer prices for food at home, which have smaller fluctuations around general inflation (a horizontal line on this chart). There are noticeable peaks in grocery prices soon after the peaks in farm and processed goods prices, and an almost 10% fall in real grocery prices from 2015 through 2019, but the overall average cost of items sold at grocery stores mostly tracks general inflation, unlike the top line showing prices for food away from home at restaurants and food service establishments.

The top line shows how prices for food away from home tracked grocery costs until the 2009–2014 period when they did not fall as grocery prices did, and especially the period since 2014 when restaurant prices kept rising as grocery prices fluctuated. The difference is that wages and rents play a larger role in restaurant and cafeteria costs than in groceries or general inflation. Like other price indexes, the data shown in Fig. 7.13 do not fully take account of changes in product quality within each category, and some of the rising average cost of restaurant meals since 2014 could potentially be attributable to higher average quality, in addition to higher real wages for workers and higher real rents and other costs paid by restaurant owners.

The food price crises and periodic spikes in costs of raw agricultural products are extremely important sources of risk for farm families and food market participants. For consumers buying retail products the resulting percentage price changes they experience are much smaller in magnitude as shown in Fig. 7.13, but still important for both the U.S. and a global average as shown in Fig. 7.14.

Fig. 7.14
A dual-line graph plots the average rise in real food prices over the previous 12 months from January 1998 to June 2023 for the United States and a global average of up to 138 countries. Recent food price spikes are indicated in 2008, around the COVID onset through 2021, and during the COVID recovery of 2022 to 2023.

Source: Authors’ chart of own calculations. U.S. data are calculated from the Bureau of Labor Statistics, and global data are from the IMF, averaging over all countries reporting monthly consumer price indexes [CPI] for food and for all goods and services, January 2000–December 2022. Each observation is the average monthly rise over the previous 12 months, times twelve for an annualized value. Number of countries rises from 51 in January 2000 to 95 in 2005 and then 138 from 2015 onwards. Raw data for all countries are at https://data.imf.org and an updated chart for the U.S. is at https://fred.stlouisfed.org/graph/?g=12Myr

Average rise in real food prices over the previous 12 months, January 1998–June 2023

The food price data in Fig. 7.14 are average food price inflation in real terms, relative to the overall consumer price index for all goods and services, over the previous 12 months starting in January 1998 for the U.S. and January 2000 for the global average. The global average has some change in composition as an increasing number of countries reported data over time, but the overall picture reveals some degree of synchronization in food price spikes around the world.

Periodic food price crises as shown in Fig. 7.14 represent entire years of sustained monthly rises in the real cost of food relative to all other goods and services, followed by sharp falls in the relative cost of food. These price crises are of enormous importance to consumers and political leaders, often attracting intense media attention.

When food prices spike up many households have great difficulty meeting basic needs. By Engel’s law we know that lower-income people spend a larger fraction of their total income on food. For example, a low-income household spending 50% of their available resources on food and facing 4% higher food prices would have 2% lower real income overall. By Bennett’s law we also know that lower-income people will already be reliant on the lowest cost sources of dietary energy before the food price rise, so they cannot switch to lower-cost foods. What we actually observe in these cases is cuts in spending on other things such as education and health care. Middle-income people spending 20% of income on food face a smaller cut in overall real income and have a choice between downgrading their diet quality to what lower-income people normally consume and cutting back on other things as lower-income people do.
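
The arithmetic in these examples is a first-order approximation that holds quantities fixed, ignoring any switching between foods or other goods; a minimal sketch:

```python
def real_income_loss(food_share: float, food_price_rise: float) -> float:
    """First-order fall in real income from a food price rise: budget share times the rise."""
    return food_share * food_price_rise


print(f"{real_income_loss(0.50, 0.04):.1%}")  # 2.0% loss for a household spending half on food
print(f"{real_income_loss(0.20, 0.04):.1%}")  # 0.8% loss at a 20% food budget share
```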

For many consumers, food price crises are poverty crises. But high costs for consumers coincide with high prices paid to producers, and the lowest-income people in low-income countries are farmers who produce food. Most of those farmers sell some or most of their production every year to pay for other things, including foods that they buy because their own farms are suitable only for certain kinds of production. On balance, most such farmers benefit from periods of high prices and suffer during the long periods of low prices before the runup and brief peak in prices seen during food crises. For farmers with larger quantities available to sell, the brief periods of high prices are among the few high-income years they ever experience, while for other farmers their own production and sales simply offset the higher prices of purchased foods, insulating them from the crisis.

7.2.2.6 Hunger, Energy Balance and the Prevalence of Undernourishment

Price spikes and food crises are important, but access to sufficient food can be an everyday challenge even when prices are low. Throughout human history people have devoted enormous efforts to ensure that we all have enough food to power each day’s work and maintain our own health. Despite those efforts, many people experience hunger and food insecurity, and the economics of that problem begins with an understanding of energy balance over time.

When people eat less than their body needs, hunger drives us to seek more food through a variety of mechanisms that nutritionists now know are largely unconscious. Those drivers include the feeling of hunger itself, along with fatigue and other symptoms that push us to seek more food, mediated by hormones and other physiological responses. Hunger also triggers emotional responses, increasing irritability and stress. Some of these mechanisms can be altered by appetite-suppressing medications such as semaglutide, which mimics GLP-1 hormones, but for almost all people energy balance is achieved through conscious effort or unconscious regulation in ways that may be easier or harder to sustain from day to day.

The amount of food each person needs to maintain health grows with body size from gestation through childhood and adolescence, rises temporarily during pregnancy and breastfeeding, and varies with physical activity and with recovery from injury and disease. Some people can meet these needs with ease, while many others must overcome great challenges to sustain intake in balance with energy expenditure. A schematic view of the mechanisms involved in maintaining energy balance is shown in Fig. 7.15.

Fig. 7.15
An illustration of the human body indicates the interaction of economic and psychosocial factors and biological and physiological processes, including hormones, neurotransmitters, and autonomous regulation. The economic and psychosocial factors include perception, cognition, and intentional efforts.

Source: Authors’ infographic, using human body outline sketch in the public domain from www.seekpng.com as image number u2q8r5w7t4a9i1i1

Interaction of conscious and unconscious mechanisms for energy balance

The sketch in Fig. 7.15 shows how economic and psychosocial factors interact with biological or physiological processes to determine dietary intake. These mechanisms and their interactions remain poorly understood, but evidence from around the world in very diverse settings clearly demonstrates the importance of autonomous processes underlying food consumption.

Through most of human history and continuing today, most people meet their energy needs with no knowledge at all about how much energy is in their food. The energy contained in food and its use in metabolism was not measured, or even known to exist, until the 1780s, when the French chemist Antoine Lavoisier invented a device using melting ice to measure the heat present in food and released by animals that ate that food. Lavoisier called that heat ‘caloric’, after the Latin word calor (calor in Spanish, chaleur in French). In the 1840s the English physicist James Joule showed that calories of heat were linked to physical motion, demonstrating the equivalence between heat and mechanical energy.

Researchers now use kilocalories (kcal) and kilojoules (kJ) interchangeably to measure the energy in each item (1 kcal ≈ 4.184 kJ), but consumers usually have no idea how many calories or joules they have eaten each day. Many countries require food manufacturers and restaurants to post that information for individual items, and a person’s energy intake can be estimated using a food diary or nutrition assessment. Although the scientific discovery and disclosure of energy in food is important for food policy, abundant evidence demonstrates that energy balance is not a conscious choice. Food choice plays a role in diet composition, which influences a person’s future health, but total energy consumed over the course of a week or a month is driven by dietary practices responding to the biological and physiological processes shown in Fig. 7.15.

The degree to which societal factors such as poverty and food scarcity prevent people from meeting their biological needs has been debated since antiquity. As soon as human energy requirements were first measured, they were found to be closely linked to body size and composition, and as soon as calories in food could be counted, people began to compare the two. In 1961, an Indian statistician named P.V. Sukhatme devised a method to compare each country’s total food consumption to a standardized distribution of likely dietary intake, given the range of body sizes in its population, and thereby track what the FAO still computes each year as the country’s Prevalence of Undernourishment (PoU).

The FAO began reporting its PoU estimate in 1974, at which time it calculated that 462 million of the world’s 4 billion people lived in countries where the distribution of intake was unlikely to meet their needs. FAO continues to report that number every year, finding for example that in 2022 a total of around 735 million of the world’s 8 billion people were undernourished in this sense. Observers sometimes interpret this as the number of hungry people in the world, but the estimate does not actually derive from comparing individual intakes to individual energy requirements. It is only a rough estimate of likely intake relative to need, assuming that intake follows a standardized lognormal distribution and that each person’s body size reflects a balance between calorie intake and expenditure, neither of which is exactly true. What the FAO’s undernourishment data show is each year’s change in a country’s total food consumption relative to its total population and demographic composition. That is an extremely useful number, so the FAO continues to publish it even as it adopts more granular measures such as the food insecurity scale introduced in 2014 and the cost and affordability of healthy diets indicator introduced in 2022, which we discuss in turn below.
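
A minimal sketch of this kind of calculation, using hypothetical numbers rather than the FAO’s actual inputs, assumes intake across people follows a lognormal distribution with a given mean and coefficient of variation, and counts the share of that distribution falling below a minimum dietary energy requirement:

from math import log, sqrt
from statistics import NormalDist

def prevalence_of_undernourishment(mean_kcal, cv, mder_kcal):
    # Convert the mean and coefficient of variation of a lognormal intake
    # distribution into the parameters of the underlying normal distribution.
    sigma2 = log(1 + cv ** 2)
    mu = log(mean_kcal) - sigma2 / 2
    # PoU = share of the population whose assumed intake falls below the
    # minimum dietary energy requirement (MDER).
    return NormalDist(mu, sqrt(sigma2)).cdf(log(mder_kcal))

# Hypothetical illustration only (not FAO estimates): mean supply of
# 2450 kcal/day, coefficient of variation 0.25, MDER of 1800 kcal/day.
print(round(prevalence_of_undernourishment(2450, 0.25, 1800), 2))  # about 0.13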

7.2.2.7 Food Insecurity in the U.S. and Worldwide

In the early 1980s, a graduate student in nutrition at Cornell University named Kathy Radimer had recently returned from Peace Corps service in West Africa and found herself in the U.S. at a time when many people were struggling with an economic downturn and high unemployment. Community leaders and researchers had long spoken of widespread hunger in America, but conditions in the U.S. were clearly quite different from what Radimer had seen in Africa.

Radimer’s Peace Corps work had been in Burkina Faso and Cameroon, where more than half of the population lived on incomes below a dollar a day. Even the lowest-income people in America seemed wealthy in comparison. The U.S. clearly had an abundant diversity of food year-round, the FAO’s official PoU measure showed almost no undernourishment, and even low-income Americans did not show obvious signs of undernutrition. Despite the arguments of community leaders and researchers who worked closely with low-income people, the U.S. government at that time openly dismissed the idea that Americans were going hungry.

In part because of her varied experiences, Radimer approached the measurement of deprivation in a new way. Her dissertation, entitled Understanding hunger and developing indicators to assess it, did just that. Radimer conducted long, open-ended interviews with dozens of low-income caregivers about how they met their family’s food needs, and then experimented with many kinds of questions about food choice and meal preparation. Her research found that the clearest way to ask people about hunger was to pose a series of questions about whether they had recently skipped meals, eaten less or different foods, eaten fewer foods, felt hungry and not eaten, run out of food, worried about whether there would be enough food, not eaten balanced meals, or had similar experiences of food-related deprivation, with every such question framed in terms of whether the respondent had experienced that episode of deprivation because they couldn’t afford or didn’t have enough money to buy the foods they usually consumed.

The novelty in Radimer’s approach was to ask each question in the same terms that respondents themselves had used. Radimer learned that people with a wide range of dietary practices reported a similar set of responses to being unable to obtain their usual foods. She found that people remembered those experiences vividly even after several months, and that people facing more severe deprivation reported having done a larger number of different things. Most importantly, Radimer discovered that people said the reason they could not obtain their usual diet was that they had run out of money to buy food. Respondents attributed running out of money to both loss of income and increased expenses, and almost always reported that a sequence of shocks had depleted their savings.

Kathy Radimer’s dissertation was published in 1990, and the basic idea was quickly adopted by other researchers as a ten-item Radimer/Cornell Hunger and Food Insecurity Scale. By 1995 the USDA had adopted a version of her approach as an 18-item Household Food Security Survey, and in 2014 the FAO adopted a shorter version for global use as an 8-item Food Insecurity Experience Scale. Both ask generally similar questions about whether the respondent had experienced each kind of deprivation at any time in the past 12 months. The results have been of extraordinary value in helping governments and researchers measure deprivation in many different contexts, identifying when and why so many people around the world experience episodes of hunger and deprivation even when prices are low, and food is abundant for other people in their community.

The USDA and FAO versions differ slightly, in revealing ways. For example, the USDA survey asks one short question first to screen out respondents who say that over the past 12 months their household always had ‘enough of the kinds of food we want to eat’, then if needed continues with the remaining questions. Also, the USDA counts people as food insecure if they answer yes to three or more questions, whereas the FAO procedure gives each question different weights based on the probability that a yes on one of them predicts other yes responses. The FAO technique is designed around the idea that each question is a different aspect of the same underlying thing, so questions that predict other yes responses are strong indicators of that thing, whereas in the USDA method all questions have equal weight.
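
As a minimal sketch of the counting rule just described (with hypothetical answers), the USDA-style classification is simply a threshold on the raw count of affirmed items; the FAO’s weighted procedure can be thought of as replacing that equal-weight count with item weights estimated from the response patterns:

def usda_style_food_insecure(responses, threshold=3):
    # Raw-score rule: classify a household as food insecure if it affirms
    # at least `threshold` of the survey items (each answer coded 1 = yes).
    return sum(responses) >= threshold

# Hypothetical answers to eight items:
household = [1, 1, 0, 1, 0, 0, 0, 0]
print(usda_style_food_insecure(household))  # True: three items affirmed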

The measurement of food insecurity continues to evolve, in ways that provide important insights into the stresses and difficulties that caregivers experience when providing food to their families. Researchers are experimenting with more frequent surveys and shorter recall periods, asking different people in each household or asking similar questions in different ways, but Kathy Radimer’s discovery provided a feasible way to quantify an aspect of human wellbeing that had previously not been measured, revealing trends and disparities like those shown in Fig. 7.16.

Fig. 7.16
A multi-line graph plots the percent of households from 1995 to 2021 for black non-Hispanic, Hispanic, with children less than 6 years, total, with no children, and white non-Hispanic. The percent was high for black non-Hispanic and low for white non-Hispanic.

Source: Authors’ chart of data from USDA, Economic Research Service, based on the Current Population Survey supplement of Household Food Security Survey questions. Updates available at https://www.ers.usda.gov/topics/food-nutrition-assistance/food-security-in-the-u-s

Experience of food insecurity in the U.S., 1995–2021

Data in Fig. 7.16 show how a population’s responses to the food insecurity questionnaire can be extremely revealing about trends and patterns of deprivation. Results for the U.S. begin in 1995 and changed little for more than a decade until the sharp rise in 2008.

The sudden increase in food insecurity during 2008 reflected the loss of jobs and lack of credit from banks, conditions in some ways like those that had sparked Kathy Radimer’s original research in the early 1980s. Both periods saw a sharp rise in poverty rates as shown in Fig. 7.1. We will address these spikes in poverty and unemployment when we turn to the macroeconomy in Chapter 9. Downturns in activity can originate anywhere in the economy and then spread to other sectors; the 2008 downturn was caused by a wave of housing mortgage defaults, bank failures and inability to make new loans to all kinds of businesses, leading to high unemployment and low incomes across the U.S.

Just before the unemployment and credit crisis of 2008 there had been a worldwide spike in agricultural product and food prices, peaking in 2007 as shown by U.S. producer prices in Fig. 7.13 and global consumer prices in the dotted line of Fig. 7.14. Consumer prices for food relative to all other things in the U.S. peaked in December 2008 and fell back sharply to a historic low in December 2009. The wave of unemployment kept rising, however: the number of unemployed Americans did not peak until 2010, and poverty rates stayed high for several years as shown in Fig. 7.1. Despite the return to low food prices, food insecurity rates remained elevated and fell only gradually after the crisis, as it took several years for households to recover and accumulate sufficient savings to reliably obtain their usual diets and report no food insecurity over the previous 12 months.

A particularly important feature of the food insecurity measure is its use to identify disparities between groups, for which the data in Fig. 7.16 begin in 2001. The levels and changes in those disparities generally follow the patterns found by other measures of poverty and deprivation, with added detail on the challenge of meeting regular food expenditures for households with preschool children. As shown in Fig. 7.16, food insecurity among households with children under six years of age rose above 20% for several years after the 2008 crisis and then fell sharply to below 15% just before the pandemic. The gap between households with preschoolers and households without any children fell from more than 10 percentage points to under 5 points by 2018. The gap widened again in 2020 with the onset of COVID, then narrowed to a historically small level in 2021, the year of the expanded U.S. child tax credit shown in the previous section’s Fig. 7.3.

7.2.2.8 Food Access and Affordability of Healthful Diets

The introduction of food insecurity measurement in 1995 occurred during a period of relatively low and stable U.S. food prices shown in Fig. 7.13, more than a decade before the food price spike and the high rates of food insecurity observed for several years thereafter in Fig. 7.16. From the 1990s until the late 2010s there was an increasing abundance of agricultural products globally, especially cereal grains, vegetable oil and other low-cost sources of dietary energy, and steady declines in the share of people experiencing extreme poverty worldwide as shown in Fig. 7.7.

During the period of relative food abundance from the 1990s to the late 2010s, the focus of food policy shifted from quantity to quality, with increasing evidence about how a person’s usual diet influences their future health. The U.S. National Health and Nutrition Examination Survey (NHANES) and other data sources worldwide revealed increases in the prevalence of overweight and obesity as well as a growing burden of diabetes, hypertension and other diseases, all closely correlated with changes in the composition of foods available and their share of food consumption. The lowest-income countries were also seeing increases in total food consumption, with increases in children’s heights as well as weights throughout the life course. All countries continued to experience undernutrition in some dimensions such as iron-deficiency anemia, and those problems were increasingly understood in terms of dietary patterns and the types of foods consumed, affecting the balance among food groups and the displacement of more healthful foods by less healthful ones when meeting daily energy needs.

The worldwide shift in attention from food quantity to diet quality that began in the mid-1990s took many forms, and coincided with improvements in data availability and research on the types of food being produced and consumed in the U.S. and globally. For food economists, an important consequence of this nutrition research has been to show the difference between the foods that would be chosen if consumers wanted only to improve their future health and the foods actually chosen based on revealed preferences and effective demand. The gap between foods for health and foods actually chosen could be due partly to the fact that consumers cannot know, and may be misled about, the impact of each item on their future health, and partly to the fact that even consumers who knew the true healthiness of each food would have many other priorities beyond health such as taste, convenience and aspirations.

New evidence on diet-health relationships since the 1990s has allowed food economists to measure access and affordability of high-quality diets, thereby indicating whether consumption of certain foods is due to being at a place and time without access to higher-quality options (measured by unavailability or high prices for more healthful foods), to unaffordability of those options (measured by diet costs relative to household income), or to displacement of more healthful foods by less healthful foods (despite the affordability of more healthful options). The ability to measure food access and affordability of high-quality, supportive diets results from a set of simultaneous shifts in the U.S. and worldwide.

One shift occurred in the U.S., as nutrition researchers increasingly emphasized balance among food groups, for example in the official national Dietary Guidelines for Americans (DGAs). The U.S. government first produced its DGAs in 1980, based in part on evidence from the first round of NHANES data collected in the early 1970s, when the most important concerns involved deficiencies in several vitamins and minerals. Government funding for the DGAs specified a revision every five years, and by the late 1980s there had been such large increases in consumption of animal fats and vegetable oil, strongly correlated with increased cardiovascular disease, that the 1990 edition called for limiting all kinds of fats and oils.

The 1990 edition of the U.S. DGAs introduced the idea that balance among food groups could be illustrated using a ‘food pyramid’ with basic starchy staples at the bottom, showing the relative importance of different food categories. That visual food guide was soon found to be unhelpful as evidence emerged that the rapid increase in U.S. consumption of refined flour and added sugar from the late 1980s through the 1990s was linked to high rates of diabetes and obesity. Based on new data from the 1990s, the 2000 edition of the DGAs introduced a recommended level of vegetable and fruit consumption, the 2005 edition redesigned the pyramid to reduce the visibility of starchy staples, and the 2010 edition switched visual metaphors to the shares of a meal on a plate, accompanied by a fork and a glass of milk, known as MyPlate. Each generation of American children grew up with these pyramids and then the MyPlate guidance on school walls, in pamphlets and online, and the DGAs also influenced the composition of meals at schools and other government facilities.

The 1990s shift in focus to diet quality defined in terms of food groups occurred globally, not just in the U.S., as other countries introduced their own dietary guidelines in response to the growing gap between actual consumption and evidence about which foods would best improve consumers’ future health. One significant step occurred in November 1996, when the United Nations brought government leaders to a World Food Summit at the FAO headquarters in Rome. The official summit declaration, signed by representatives of 186 countries, defined food security as ‘when all people, at all times, have physical and economic access to sufficient, safe and nutritious food to meet their dietary needs and food preferences for an active and healthy life’. This phrasing had evolved from earlier government declarations, extending the goals of government intervention from simply having enough food in each country each year to year-round access to healthful diets.

The rising importance of diet quality in policy documents was accompanied by an explosion of new data about the nutritional composition of foods purchased and consumed, due in part to the U.S. Nutrition Labeling and Education Act of 1990 and similar legislation adopted elsewhere. Implementation of that law, which was based on concerns from the 1970s and 1980s about vitamins, minerals, fats and other specific nutrients, led to the nutrition facts panel on packaged foods, and to USDA publication of composition data for all foods consumed in the U.S. as recorded in the flagship NHANES survey. Other countries made similar investments in food composition data and dietary recall surveys, leading to stronger evidence about diet-disease relationships.

As incomes rose and people shifted towards more packaged and processed foods, fortification and supplementation programs came to fill gaps in requirements for individual vitamins and minerals. But packaged and processed foods are highly palatable and easy to consume, especially for people looking to save time on food preparation, so intakes of refined carbohydrates and added sugar, animal fats and vegetable oil, added salt and other ingredients often increased to harmful levels. Those excesses, driven in part by increased use of food away from home, displaced more healthful foods needed for balanced diets, especially vegetables and fruits, animal source foods like fish, eggs and dairy as well as meat, and sources of plant protein such as legumes, nuts and seeds. All these nutrient-rich food groups are more expensive per unit of dietary energy than refined grains and other starchy staples, vegetable oil and sugar, due to the greater difficulty of their production and distribution.

In high-income countries, increasing awareness of the difference between a high-quality diet and what people were consuming led to focus on access to more healthful items as a possible cause of disparities in diet quality and health. For example, in 1995, a Department of Health report from the government of Scotland described low-income urban neighborhoods as food deserts, referring to the relative lack of larger grocery outlets selling a variety of fruits, vegetables and other foods increasingly known to be protective against diet-related diseases. That term became widely used in the late 1990s and early 2000s, fueling an explosion of research using newly available geocoded data and mapping tools to describe the distances that households would have to travel to reach larger markets with a greater variety of healthful offerings.

Many ways of measuring food deserts and access to healthful items were tried during the 1990s and 2000s. The U.S. Congress directed USDA to conduct an official study of food deserts in 2008, leading to even more research and the development in the 2010s of rich geocoded data on each location’s food environment, typically defined in terms of the type and number of retail outlets at each place. Those data included a pioneering U.S. National Household Food Acquisition and Purchase Survey (FoodAPS), implemented in 2012–13, asking individuals where they had obtained each type of food they consumed, and increasing use of ‘scanner’ data showing the exact price and item purchased in specific transactions. Almost all scanner data are initially proprietary, used by retailers and manufacturers for internal decision-making, but in the 2010s the USDA and others increasingly purchased these data for public-sector use in policy analysis.

During the 2000s, new data about the nutritional attributes of foods allowed health scientists, initially led by Nicole Darmon in France and Adam Drewnowski in the U.S., to begin matching purchased items to their sales prices. They found that foods with the lowest cost per calorie tend to have the most calories per unit of weight or volume, and the highest ratio of calories to the full set of nutrients needed for health. The ingredients providing the energy in these low-cost, calorie-dense foods tend to be the least expensive agricultural products per calorie, which are not only starchy staples but also vegetable oil and sugar. Food processing often combines those raw ingredients with other foods, removing moisture and fiber in ways that raise calories per gram of solid foods, and adding sugar to beverages in ways that raise calories per liter.
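
A minimal sketch of that kind of comparison, using hypothetical items and prices rather than the researchers’ actual data, computes each item’s cost per 100 kcal and its energy density in kcal per gram:

# Hypothetical items: price per package, package weight and calorie content.
items = [
    {"name": "refined-grain snack", "price": 2.00, "grams": 400, "kcal": 1800},
    {"name": "fresh vegetables",    "price": 2.50, "grams": 500, "kcal": 125},
]
for item in items:
    cost_per_100kcal = 100 * item["price"] / item["kcal"]  # cost of energy
    kcal_per_gram = item["kcal"] / item["grams"]           # energy density
    print(item["name"], round(cost_per_100kcal, 2), round(kcal_per_gram, 2))
# In this illustration the item that is cheapest per calorie is also the
# most calorie-dense, mirroring the pattern described above.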

The health scientists’ findings of high calorie content in low-cost foods, especially highly processed and packaged food, led to the idea that ‘food deserts’ with few healthful options were more accurately seen as ‘food swamps’ where the lowest cost options meet energy needs without attributes required for health. These same patterns led to the observation by health scientists that ultraprocessed foods (items with the most processing, including added ingredients as well as removal of naturally occurring food attributes) were particularly harmful to health. That view arose not only because these items contained inexpensive refined flour, oil and sugar that delivered palatable calories without other needed nutritional attributes, but also because their other ingredients, processing and packaging as well as advertising and marketing efforts had made those products tastier and more attractive than other foods.

By the 2010s, health scientists increasingly found that highly processed foods and meals away from home were contributing to diet-related diseases by displacing foods with attributes needed for future health such as vegetables and fruits, animal source foods like fish, eggs, dairy, and meat, and sources of plant protein such as legumes, nuts and seeds. Those food groups were clearly more expensive ways of meeting daily energy needs than plain carbohydrates and vegetable oil. Health scientists also found that the nutrient dense food groups consumed by higher-income people worldwide were primarily meat and some types of fish or seafood that delivered only certain nutrients and not others. The gap between effective demand at higher incomes and foods needed for health was increasingly seen to consist of high consumption of highly processed foods, meals away from home, and meat or other foods that displace the mix of vegetables, fruits, legumes, nuts and seeds, and fish or eggs or dairy that is associated with long-term health and communicated in dietary guidelines.

To measure access and affordability of healthful diets using the toolkit of economics, from the mid-2010s a series of projects began assembling retail prices, matching items to their food composition and automating the selection of the lowest cost items that would meet health needs. Using the least expensive items for health isolates the cost of healthiness from the cost of other attributes such as taste, convenience and aspirations, distinguishing the cost and affordability of healthful diets from other drivers of food choice. This method was adopted in 2022 by the FAO and the World Bank as a new metric of food access, producing the cost data shown in Fig. 7.17.
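
A minimal sketch of such an automated selection is shown below. The food groups, prices per 100 kcal and daily energy allocation are all hypothetical; the published FAO and World Bank indicator uses item prices reported by national statistical organizations and quantity targets based on food-based dietary guidelines.

def least_cost_healthy_diet(prices_per_100kcal, kcal_targets):
    # For each food group, take the cheapest locally available item
    # (price per 100 kcal) and buy just enough of it to meet that group's
    # share of the day's energy needs, then sum the cost across groups.
    total = 0.0
    for group, kcal in kcal_targets.items():
        cheapest = min(prices_per_100kcal[group])
        total += cheapest * kcal / 100
    return total

# Hypothetical prices (PPP dollars per 100 kcal) and energy allocation (kcal/day):
prices = {"staples": [0.05, 0.08], "vegetables": [0.60, 0.45],
          "fruits": [0.50, 0.70], "animal_source": [0.40, 0.55],
          "legumes_nuts_seeds": [0.15, 0.20], "oils_fats": [0.04, 0.06]}
targets = {"staples": 1160, "vegetables": 110, "fruits": 160,
           "animal_source": 300, "legumes_nuts_seeds": 300, "oils_fats": 300}
print(round(least_cost_healthy_diet(prices, targets), 2))  # a few dollars per day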

Fig. 7.17
A scatterplot of cost per day at PPP prices in 2017 versus GNI per capita at PPP prices in 2017 dollars on a log scale for the cost of the least expensive foods for a healthy diet and actual average food spending. The actual average food spending tends towards a positive correlation.

Source: Authors’ chart of diet cost data from FAO, World Bank and the Food Prices for Nutrition project, using item prices reported by national statistical organizations through the International Comparison Program [ICP] downloaded from https://databank.worldbank.org/source/food-prices-for-nutrition. Food expenditures are derived from those data, and national income [GNI] is from the World Development Indicators https://databank.worldbank.org/source/world-development-indicators

Cost of the least expensive foods for a healthy diet and actual food spending in 2017

The data shown in Fig. 7.17 contrast the cost of the least expensive items for health with national average food expenditures, per person per day, in countries at each level of national income. A first discovery is that the lowest-cost locally available foods, when added up in proportions needed for health, are not less expensive in low-income countries. To meet the daily needs of a representative adult they would cost in the range of two to four dollars per day in purchasing power parity terms. This is surprising because travelers from high- to low-income countries typically find food to be inexpensive, but those impressions come from converting currencies at market exchange rates. In terms of the local population’s purchasing power, costs are similar across countries for the same reason that grocery prices follow general inflation over time in the U.S., which is that the cost structure of retail food items includes a mix of labor and facilities, energy and other resources that is broadly aligned with costs for all goods and services.

The cost of sufficient foods for a healthful diet does not differ much by income level, but actual spending per day on food does rise with income as shown in Fig. 7.17. This result follows Engel’s law and Bennett’s law: people in higher-income countries spend, on average, more than twice the cost of the least expensive items for health, because they have money to spend and choose foods for reasons other than health such as taste and aspirations, convenience and sociability. In lower-income countries, by contrast, people on average spend about half the cost of a healthful diet, because they lack the income needed to acquire sufficient quantities of more expensive food groups such as vegetables, fruits and animal source foods. These disparities between national averages reflect similar disparities within countries and drive a big gap in affordability that differs from the older measure of food insecurity, as shown in Fig. 7.18.

Fig. 7.18
A scatterplot of percent of the population in 2017 versus GNI per capita at PPP prices in 2017 dollars on a log scale for the proportion of the population who report experiencing food insecurity in the previous year and the proportion of the population who cannot afford a healthy diet.

Source: Authors’ chart of data showing the prevalence of moderate or severe food insecurity in the previous year based on the FAO’s Food Insecurity Experience Scale [FIES], and unaffordability of healthy diets from FAO, World Bank and the Food Prices for Nutrition project, using item prices reported by national statistical organizations through the International Comparison Program [ICP] and each country’s income distribution estimated from household surveys by the World Bank. Unaffordability is defined as the fraction of people whose income available for food is below their country’s cost of a healthy diet, based on World Bank estimates of income distribution and allowing 52% of income to be spent on food, from https://databank.worldbank.org/source/food-prices-for-nutrition. Experience of food insecurity and national income [GNI] is from https://databank.worldbank.org/source/world-development-indicators

Unaffordability of healthy diets and prevalence of food insecurity in 2017

The unaffordability data in Fig. 7.18 are designed to provide the most useful available estimate of how many people globally do not have enough income to obtain a least-cost healthful diet in their country. For this measure, available resources are defined as just over half (52%) of each person’s income, while each country’s income distribution is estimated by the World Bank using the same set of household surveys as the poverty data in Figures 7.6 and 7.7. Combining those income data with the diet costs shown in Fig. 7.17 reveals that over 90% of the population in the lowest-income countries, but fewer than 5% of people in high-income countries, cannot afford a high-quality diet. This result is partly due to the fact that diet costs are not lower for low-income people, and partly due to the definition of affordability used by the FAO and the World Bank for this way of measuring food access.

The threshold of affordability for the purpose of global monitoring was defined by the FAO and the World Bank as the average fraction of total household expenditure that is spent on food in low-income countries, which happened to be 52% in 2017. This definition of affordability was proposed and retained by the FAO and the World Bank as the most useful of the available options, first because that definition sets the threshold of income needed for nonfood expenditure at the average observed in the low-income reference population that is most relevant to global food security, and second because that threshold is computed from the same data as diet costs and would be updated at the same time for monitoring change in the future.
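
A minimal sketch of that affordability rule, using a handful of hypothetical daily incomes in place of the World Bank’s full income distributions, counts a person as unable to afford a healthy diet when 52% of their income falls below the diet’s cost:

def unaffordability_rate(incomes, diet_cost, food_share=0.52):
    # Share of people for whom the income plausibly available for food
    # (food_share times total income) is below the cost of the least
    # expensive healthy diet in their country.
    cannot_afford = [y for y in incomes if food_share * y < diet_cost]
    return len(cannot_afford) / len(incomes)

# Hypothetical daily incomes (PPP dollars per person) and a $3.50 diet cost:
sample_incomes = [1.2, 1.9, 2.5, 3.4, 4.8, 6.5, 9.0, 14.0]
print(unaffordability_rate(sample_incomes, 3.50))  # 0.75 with these numbers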

The procedure used for calculating the unaffordability of healthy diets shown in Fig. 7.18 is closely related to the methods used for calculating poverty rates and deprivation in general, but adapted to the needs of monitoring global access to sufficient quantities of the lowest-cost local items in each food group. For example, the U.S. poverty line was originally computed by Mollie Orshansky in 1963–64 as three times the cost per day of the USDA’s low-cost food plan. That diet plan included a wider range of more expensive foods than the least-cost healthful diets used for global monitoring today, and the income share for food was based on the U.S. national average, which was 33% in 1955, lower than the 52% share observed in low-income countries in 2017.
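
One way to see the parallel is to write both thresholds as a food cost scaled up by the inverse of a food budget share, using the shares reported above:

\[
\text{U.S. poverty line (1963)} \approx \frac{\text{food plan cost}}{0.33} \approx 3 \times \text{food plan cost},
\qquad
\text{income needed to afford a healthy diet (2017)} = \frac{\text{diet cost}}{0.52} \approx 1.9 \times \text{diet cost}.
\]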

The FAO and the World Bank introduced the unaffordability metric for global monitoring in 2022, with locally adapted versions rolled out for use within countries at the same time. These methods capture access to foods that would just meet health needs. For use in measuring deprivation more generally, costs would be higher to reflect food preferences and time use in meal preparation, and income shares available for food would be lower to reflect nonfood needs above actual average spending in low-income countries. The primary purpose of capturing food access using affordability of least-cost items is to distinguish among three possible causes of unbalanced diets: (1) in some places, even the most affordable items in the more expensive food groups such as vegetables or fruits have unusually high prices, and could be made more accessible by reducing costs to international standards through improved production and distribution; (2) for some households at each place, available incomes could be below the cost of a healthy diet, so affordability would require higher incomes or safety nets; or (3) some populations might have access and be able to afford a healthy diet, and yet consume other foods instead for a variety of reasons such as meal preparation costs, tastes and aspirations.

The food insecurity data in Fig. 7.18 are the FAO’s global counterpart to the U.S. data in Fig. 7.16, based on an eight-question FIES scale asking whether the person skipped meals, ate less or differently than usual, went hungry or had other similar experiences for lack of money to buy food. In low-income countries, the fraction of people reporting food insecurity is much smaller than the fraction who cannot afford a healthful diet, because the FIES questions refer to a person’s usual diet, which is much less expensive than a healthful diet because it relies much more heavily on starchy staples. In high-income countries, many more people report being food insecure than cannot afford a healthful diet, as their usual foods are much more expensive than the very basic items included in the least-cost healthful diet.

Comparing the two kinds of data reveals how food insecurity prevalence, which refers to people having run out of money to buy their usual diets, successfully captures the financial vulnerability of people with low savings. But it does not capture nutrition security, which would require access to high-quality diet items that the world’s lowest-income people cannot afford, and that higher-income people might not want to use because they are too time-consuming to prepare and not sufficiently preferred for other reasons. The actual items included in these least-cost diets are foods that could be eaten and would be healthful, but they are not the most delicious or attractive meal options. Food access measurement can guide agricultural production and distribution to make low-cost options available, and can guide social assistance and safety nets to ensure affordability of those options, but actual food choice depends on other aspects of deprivation as well, as revealed by experiences of food insecurity and by the poverty measurement discussed in this chapter.

7.2.3 Conclusion

The measurement methods discussed in this chapter extend the economics toolkit to deprivation over time and among people worldwide. An important aspect of these metrics is to look beyond effective demand and consumer surplus to the foods and other things that people are not buying due to lack of purchasing power, whether episodically, from running out of money as in experiences of food insecurity, or chronically, from high costs and low average incomes as in unaffordability of high-quality, supportive diets.

Economic analysis of deprivation reveals a close relationship between risk and poverty, and close links between risk management and poverty alleviation. One reason is that poverty itself may be transient, so that reducing risk limits the number of people who ever experience poverty. Another reason is that risk aversion in consumption and other observations imply diminishing marginal utility of additional income, as people devote their initial spending to their highest priority needs. To the extent that people know that about themselves, they can understand it to be true of others as well, leading to the social insurance and mutual aid we observe.

The measurement and analysis of poverty, risk, and the relationship between them helps explain how and why people engage in collective action to pool resources, for example using premiums paid for insurance. Pooling to manage risks and limit deprivation is done through private enterprises such as insurance companies, through the voluntary nonprofit sector such as community food pantries and mutual aid groups, and through national governments such as the USDA and international organizations such as the World Food Program (WFP).

Each population’s efforts to smooth risk and protect against poverty often focus on food. One domain of intervention is in agriculture, where high variability in both production and prices makes it important to smooth risks for farmers, helping them gain resilience and achieve income growth. Market failures limit the role of private insurance in protecting farmers against risk, driving a shift towards other kinds of assistance. Another domain is for consumers, to smooth and support wellbeing by addressing how high food prices and low incomes cause deprivation. Meeting daily food requirements is a universal human need that occupies a large fraction of resources for low-income people, leading many societies to focus on ensuring that all people can always access sufficient food for an active and healthy life.

In recent decades, health scientists have identified differences between the foods that would be used if people sought only to improve their long-term health and the foods that are actually demanded and supplied as incomes rise. In higher-income settings that distinction creates a difference between the use of food assistance to absorb risks and alleviate poverty generally, and the use of food assistance to reach nutrition and health goals. The following chapter addresses that difference, for example in the question of whether assistance is provided in kind, as in the U.S. WIC program that gives people fixed quantities of specific foods, or through more cash-like transfers, as in the U.S. SNAP benefits that can be used to pay for all kinds of food at local grocery stores.

This chapter’s exploration of how the economics toolkit addresses poverty and risk reveals that designing successful risk management and social assistance programs is a work in progress. People have strong motivations to overcome deprivation in their own lives and the lives of others, but doing so requires overcoming a variety of market failures, policy failures and practical obstacles. As shown in this chapter, the risk management and social assistance toolkit has allowed sharp reductions in many kinds of extreme deprivation and in disparities between groups, with very large remaining needs to be addressed in the future.