1 Introduction

For decades, the United States (US) National Aeronautics and Space Administration (NASA) has worked in partnership with agencies such as the US Department of Agriculture (USDA), the US Agency for International Development (USAID), and the National Oceanic and Atmospheric Administration (NOAA), as well as international organizations and private industry, to support and advance the use of remotely sensed data for more informed decision-making and societal benefit. A changing profile of extreme weather hazards and societal exposure increasingly requires the large-scale view afforded by a fleet of satellites observing Earth as a system, particularly against a backdrop of challenges that include a growing world population, rapid socioeconomic development, and the need to sustainably manage finite natural resources. For example, the United Nations estimates that the world’s population will increase by 2.2 billion by 2050, with most of that growth occurring in tropical and subtropical areas, especially Africa.

Despite substantial progress over the last few decades, world hunger has been rising since 2014, and the combined threats of conflict, population growth, limited arable land, and climate variability and change will exacerbate this situation [43]. For example, seafood is an important source of protein for a significant number of people. Wild catches cannot match increasing demand and, in fact, their sustainability is in question. Therefore, aquaculture is an ever more important complement to agriculture to feed the human population. At the same time, however, the increased use of fertilizer for agriculture has led to increased runoff of nitrogen and phosphorus causing the eutrophication of water bodies, threatening aquatic ecosystems. Aside from production, lack of access to nutritious food choices or clean water and sanitation can exacerbate food insecurity and lead to malnutrition. Clearly, monitoring food production and distribution systems, in addition to water quantity and quality in support of food security, requires a global perspective.

Earth observing satellites provide the unique ability to simultaneously monitor these and other interrelated systems. Advances in our ability to measure multiple variables, combined with integrative models that help us understand the connections between these systems, provide a unique opportunity to support food security assessments. For example, ongoing international efforts on crop monitoring by the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM) initiative and others integrate remote sensing data into global and regional crop production projections, as detailed elsewhere (e.g., [22, 76]). Toward the objective of highlighting satellite data products that may be applied in support of smarter agriculture or aquaculture, we review the current status of several remotely sensed observables: variables related to vegetation, land degradation, water quantity, water quality, and air quality, as well as data assimilation and modeling efforts that combine observations with hydrodynamic, geophysical, and sometimes socioeconomic models to yield a more complete picture. This review is intended to inform the larger science community, resource managers, and policymakers, ranging from those unfamiliar with satellite data to those already using some, but perhaps not the full suite, of the observables presented here.

2 Observable: Primary Production

Estimates of gross primary production (GPP) provide valuable information on the spatial distribution and temporal variability of primary production, which in an agricultural setting, determines crop yields and fodder production for animals. Agricultural food security requires measured or modeled agricultural GPP to determine important crop and fodder production for areas of interest. Observations and models are both used to support food security solutions.

The normalized difference vegetation index (NDVI) is the ratio of the difference between surface reflectances measured in the near-infrared and red spectral bands to their sum [122]. NDVI distinguishes vegetated areas from other surface types (Figs. 1 and 2) but is not, by itself, a direct measure of GPP.
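As a minimal illustration of the index defined above, NDVI can be computed per pixel from red and near-infrared surface reflectances (the reflectance values below are hypothetical):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red + 1e-12)  # small term avoids division by zero

# Hypothetical surface reflectances for a vegetated pixel and a bare-soil pixel
print(ndvi([0.05, 0.20], [0.45, 0.25]))  # approximately [0.80, 0.11]
```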

Fig. 1
figure 1

This NDVI time series, produced on August 30, 2018, compares 2017 and 2018 wheat growing in three Canadian provinces [127]. The historical record of MODIS data since 2000 enables quantitative agricultural food and fodder production estimates using the minimum, maximum, and historical mean values by time period, calculated in near-real time

Fig. 2
figure 2

Landsat 8 scenes showing field-scale RGB imagery and changes in NDVI over the 2017 growing season, bringing global food security monitoring to the field level. Source: Landsat-8 Project Office, NASA Goddard Space Flight Center

There are several techniques that use satellite observations to determine primary production, and here we describe just a few. For example, one technique involves extrapolating net carbon exchange from eddy-covariance flux tower observations using satellite-measured absorbed photosynthetically active radiation at the 1 km scale [23, 62]; a second technique uses MODIS satellite observations in conjunction with a light-use efficiency model to produce GPP estimates at the 1 km scale [140]; a third technique uses satellite observations directly to determine both GPP and agricultural production at the native 250 m resolution of MODIS using spectral vegetation indices through the growing season [127] (Fig. 1); and a fourth technique uses chlorophyll fluorescence from the GOME-2 instrument to estimate agricultural production in combination with optical, thermal, and microwave satellite data [53].
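The light-use efficiency family of models referenced in the second technique can be sketched in a few lines; the efficiency and down-regulation scalars below are placeholders rather than the parameterization of any operational product:

```python
def gpp_lue(par, fpar, eps_max, t_scalar, vpd_scalar):
    """Light-use efficiency GPP estimate (g C m-2 d-1).

    par        incident photosynthetically active radiation (MJ m-2 d-1)
    fpar       fraction of PAR absorbed by the canopy (0-1), e.g., from satellite
    eps_max    maximum light-use efficiency (g C MJ-1), biome-specific
    t_scalar   0-1 down-regulation for low temperature
    vpd_scalar 0-1 down-regulation for high vapor pressure deficit
    """
    return eps_max * t_scalar * vpd_scalar * fpar * par

# Placeholder values for a single grid cell and day
print(gpp_lue(par=10.0, fpar=0.6, eps_max=1.0, t_scalar=0.9, vpd_scalar=0.8))
```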

An advantage of these four approaches is that the satellite observations also capture realistic surface conditions of vegetation photosynthetic capacity, phenology, disturbances, recovery, and human management. A limitation of the chlorophyll fluorescence approach is that the spatial resolution of these data is much coarser than 1 km, whereas MODIS now produces 250 m spectral vegetation index data and sustained land imaging now produces 30 m spectral vegetation index data [68].

Agricultural production estimates must be restricted to crop-specific areas to avoid confusion from other crops, natural vegetation, and areas of no vegetation. This requires the ability to follow specific crops through time with continued observations (Fig. 2). This capability is available from space, now at greater accuracy and lower latency, with sustained land imaging and multi-spectral 30 m data from Landsat-8, Sentinel-2a, and Sentinel-2b. The Harmonized Landsat and Sentinel-2 (HLS) project is now producing 30 m time series multi-spectral observations with a revisit frequency of 3.7 days at the equator [68]. Landsat-9 is planned for launch at the end of 2020 to join the sustained land imaging instrument suite, at which point the equatorial revisit interval will drop to 3 days. It is highly likely that a combination of chlorophyll fluorescence and 250 and 30 m multi-spectral satellite data will be developed in the near future to predict global agricultural crop and fodder production.

3 Observable: Land Degradation

Land degradation has been highlighted as a key development challenge by numerous international bodies, including the United Nations Convention to Combat Desertification, the Convention on Biological Diversity, the United Nations Framework Convention on Climate Change, and the Sustainable Development Goals. These conventions seek to avoid, reduce, and reverse land degradation, especially desertification and deforestation, by supporting better practices. Sustainable land management seeks to maintain vegetative cover, build soil organic matter, make efficient use of inputs, such as water, nutrients, and pesticides, and minimize off-site impacts [25].

Three indicators have been identified as metrics for quantifying land degradation that are also geophysical variables measured by Earth-orbiting satellites: land cover, carbon stocks, and land productivity or gross primary production. A review of these three land degradation indicators led Tucker and Pinzon [123] to focus on land productivity or gross primary production during a pilot study in four countries: Senegal, Uganda, Kenya, and Tanzania. NDVI from several satellite data sources at spatial scales ranging from 30 m to 8 km was evaluated and found to be well-suited for identifying degrading areas.

Time integrals of spectral vegetation indices were compared to time integrals of GOME-2 chlorophyll fluorescence from Joiner et al. [60] and found to be highly linearly correlated for 22 test areas. This confirmed the validity of using NDVI as a direct measurement of gross primary productivity. Growing season integrals of NDVI were regressed against growing season integrals of soil moisture over the AVHRR, SeaWiFS, and MODIS records for Kenya, Senegal, Tanzania, and Uganda. Consistent negative residuals were identified as areas of land degradation following the method of Ibrahim et al. [57]. Aggregations of pixels with negative residuals were then studied with Landsat 30 m and 50 cm commercial satellite data for all four countries to confirm or refute the occurrence of land degradation and to identify its cause [123].
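A hedged sketch of the residual approach described above: regress growing-season NDVI integrals on soil moisture integrals pixel by pixel and flag pixels whose residuals are consistently negative (the consistency threshold is illustrative, not the exact criterion of [57] or [123]):

```python
import numpy as np

def degradation_flag(ndvi_int, sm_int, frac_negative=0.8):
    """Flag pixels with consistently negative NDVI-vs-soil-moisture residuals.

    ndvi_int, sm_int: arrays of shape (n_years, n_pixels) holding
    growing-season integrals. Returns a boolean array of length n_pixels.
    """
    ndvi_int = np.asarray(ndvi_int, dtype=float)
    sm_int = np.asarray(sm_int, dtype=float)
    flags = np.zeros(ndvi_int.shape[1], dtype=bool)
    for p in range(ndvi_int.shape[1]):
        slope, intercept = np.polyfit(sm_int[:, p], ndvi_int[:, p], 1)
        residuals = ndvi_int[:, p] - (slope * sm_int[:, p] + intercept)
        # A pixel is flagged when most years fall below the regression line
        flags[p] = np.mean(residuals < 0) >= frac_negative
    return flags
```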

4 Observable: Precipitation

The rain and snow that fall on the Earth’s surface provide the water upon which agriculture depends, whether directly or by replenishing stores such as snowpack, lakes, reservoirs, and groundwater that are drawn upon later. The occurrence of precipitation is governed at large scales by atmospheric constraints on moisture convergence and vertical motions, but how it actually gets released at small scales displays a great deal of variability, right down to the microphysical processes that govern conversions among vapor, liquid, and ice phases. Because precipitation events are so strongly driven by these small-scale processes, and because much of the time there is no precipitation at all, the resulting statistics are far from Gaussian, highly skewed, and multi-scaled, rendering the analysis of precipitation challenging (Fig. 3).

Fig. 3
figure 3

Global long-term average precipitation patterns. Source: Global Precipitation Climatology Project

A relatively long history of precipitation data is available from surface gauges, which provide point measurements as a function of time. Because they provide actual measurements of precipitation, gauges are considered the standard, even with significant limitations. These include the lack of correlation with surrounding areas on short time scales, which makes point-to-area analyses challenging. The typical under-reporting of amounts is due both to wind effects, which reduce a gauge’s ability to capture precipitation, and to the inability of some gauge technologies to correctly record snowfall. The problem of representativeness is exacerbated by a lack of sufficiently dense gauge networks over most of the globe.

A second approach to developing precipitation records is to use surface-based radar measurements from which the precipitation amount must be estimated. These estimates can be locally useful in the U.S. and western Europe, but systematic coverage elsewhere is lacking.

The third approach to obtaining global precipitation information is to use satellite sensors. Over the past couple of decades, this has become the dominant approach for many applications due to the quasi-global coverage by satellites, an acceptably fine time/space scale of results, and relatively short latencies. One important advantage for satellites is that they typically provide precipitation estimates over both land and ocean, versus the land siting for most gauges and land/coastal coverage for radars. At present, passive microwave sensors flying on a virtual (because uncoordinated) constellation of low-Earth orbit satellites provide observations every 3 h or less about 90% of the time, with footprint sizes on the order of 10–20 km. The resulting data are processed into precipitation estimates for the individual sensors and then combined into multi-satellite products that are typically useful for agricultural applications. Estimates of precipitation that use infrared sensor data from geosynchronous orbit satellites are considered less accurate than microwave-based data. However, they are typically available for the entire latitude belt 60° N-S every half hour, so they can be used in combination with the microwave or as stand-alone products. Some products are created within about 4 h after observation time, but longer latencies of 12–24 h in other products are usually satisfactory and allow more complete estimates to be assembled.

Many satellite-based algorithms have been developed over the years and a number are routinely used to create publicly available datasets. The International Precipitation Working Group (IPWG) maintains a listing of freely available, quasi-global, long-term datasets at http://www.isac.cnr.it/~ipwg/data/datasets.html. For most users, the multi-satellite datasets with and without explicit use of surface gauge data are the most relevant. It is somewhat challenging for new users to determine the fitness for use of the various datasets for their particular application; see “How Do I Choose a Data Set?” for pointers. In general, the data are more accurate when averaged in time or space, are most representative of typical behavior as opposed to extremes, and show reduced skill in mountainous regions and cold seasons.

Taking the NASA Global Precipitation Measurement (GPM) project’s Integrated Multi-satellitE Retrievals for GPM (IMERG) datasets as examples, there are three latencies available: 4 h, 12 h, and 3.5 months (Early, Late, and Final, respectively), each on a 0.1° × 0.1° latitude/longitude grid every half hour. Longer latencies use more data and should therefore be more accurate. The page https://pmm.nasa.gov/data-access/downloads/gpm provides several format options and hot links to documentation. Currently, Version 05 covers the period March 2014 to the present, but Version 06 (planned for early 2019) will extend back to June 2000. All three IMERG products are provided for the entire period of record so that products such as crop yield models can be assured of a relatively homogeneous data record for developing calibrations.
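As a minimal sketch of working with one half-hourly IMERG granule (assuming the h5py library and the Grid/precipitationCal dataset path used by recent file versions; the path should be verified against the file’s own metadata, and the file name in the comment is hypothetical):

```python
import h5py
import numpy as np

def mean_rain_rate(granule_path, dataset="Grid/precipitationCal"):
    """Return the mean precipitation rate (mm/hr) over valid grid cells
    of one half-hourly IMERG granule on the 0.1-degree global grid."""
    with h5py.File(granule_path, "r") as f:
        rate = f[dataset][:].astype(float)
    rate[rate < 0] = np.nan  # negative values flag missing data
    return np.nanmean(rate)

# Example call with a hypothetical file name:
# print(mean_rain_rate("3B-HHR.MS.MRG.3IMERG.20180801-S000000-E002959.V05B.HDF5"))
```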

5 Observable: Terrestrial Water Storage

Food cannot be grown on land without freshwater, so monitoring and understanding how freshwater storage is distributed across the land and how it changes over time is essential to assessing food security. A portion of the water that precipitates onto the land surface is stored as surface water, snow, ice, soil moisture, or groundwater. The sum of these is known as terrestrial water storage (TWS). The importance of TWS is obvious, but it is difficult to monitor at regional to global scales using ground-based networks because installation of automated observing systems for all of the components is expensive and labor intensive, and because most countries do not share the data that they do collect [42].

The NASA/German Gravity Recovery and Climate Experiment (GRACE) mission and its successor, the GRACE Follow On mission, measure temporal changes in Earth’s gravity field that can be interpreted to determine variations in TWS [119]. TWS data from GRACE and GRACE Follow On have significantly lower spatial (~ 150,000 km2 at mid-latitudes) and temporal (~monthly) resolutions than other Earth observing satellite measurements, and they provide only the departures from the period-mean TWS state (known as TWS anomalies) as opposed to estimates of the total amount of water stored in each TWS component. Nevertheless, because satellite gravimetry is the only remote sensing technology able to detect changes in the storage of water below the first few centimeters of the soil column, including groundwater, GRACE proved to be enormously valuable for hydrological science and related applications. GRACE launched in 2002 and delivered 15 years of TWS data before the mission ended in 2017. GRACE Follow On, which launched on 22 May 2018, is expected to extend the TWS data record for at least another 5 years.

Among many scientific discoveries enabled by GRACE, it was used to quantify groundwater depletion in several major food producing regions around the world. In particular, Rodell et al. [93] and Tiwari et al. [120] documented shocking rates of groundwater decline in northern India caused primarily by extensive and intense agricultural irrigation supported by aquifers where groundwater recharge cannot keep up with extractions. Considering that hundreds of millions of people live there and depend on these crops, the situation is dire. Subsequent studies applied GRACE data to quantify groundwater losses associated with irrigated agriculture in California’s Central Valley [41], the Middle East [133], Saudi Arabia [118], the North China Plain [44], and other regions. Richey et al. [92] and Rodell et al. [94] provide global overviews, and the latter also discusses the combined effects of natural interannual variability, climate change, and human water management and consumption on TWS.

To overcome the challenges of low spatial and temporal resolution and data latency (which was typically 2–5 months with GRACE but is expected to be significantly reduced with GRACE Follow On), Zaitchik et al. [139] introduced a data assimilation approach for integrating data from GRACE and other, timelier and higher resolution observations in order to produce fields of groundwater, soil moisture, and snow water equivalent in near-real-time. Since 2011, variants of that approach have been applied to deliver wetness/drought indicator fields for the contiguous U.S. (Fig. 4) that are disseminated by the National Drought Mitigation Center, used by farmers, ranchers, other agricultural interests, public agencies, and private consultants, among others [55]. Global, GRACE data assimilation-based wetness/drought indicators have recently been developed as well, which will help to satisfy the need for timely freshwater availability data worldwide [67].
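The wetness-percentile concept behind indicators like Fig. 4 can be illustrated with a simple ranking against same-month historical values (a sketch only; the operational indicators in [55, 67] are derived from full data assimilation rather than raw anomalies, and the sample values below are hypothetical):

```python
import numpy as np

def wetness_percentile(current_value, historical_same_month):
    """Percentile of the current storage value relative to historical values
    for the same calendar month (e.g., all Augusts on record)."""
    hist = np.asarray(historical_same_month, dtype=float)
    return 100.0 * np.mean(hist <= current_value)

# Hypothetical August groundwater storage anomalies (cm equivalent water height)
augusts = [2.1, -0.5, 1.3, -3.2, 0.4, -1.8, 2.9, 0.0]
print(wetness_percentile(-2.5, augusts))  # low percentile -> dry relative to the record
```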

Fig. 4
figure 4

Groundwater wetness/drought indicator (wetness percentile relative to all Augusts during the period 1948–present) based on the assimilation of GRACE data into a land surface model for August 15, 2011. Note the severe drought encompassing most of New Mexico, Texas, Louisiana, and the Southeast

6 Observable: Snow Water Equivalent

Worldwide, more than 1.2 billion people rely on seasonal water runoff from snowpack and glaciers [18]. The Indus Basin in Asia is the largest irrigation system in the world; its snowmelt is essential for rice production in the basin and is estimated to have contributed about 13 km3 to agricultural irrigation in 2008 (~ 1/3 of Lake Mead) (Grogan personal communication, [52]). Since 1967, one million square miles of spring snow cover has disappeared from the northern hemisphere, an area roughly the size of Argentina [28]. This change in global snow cover has a significant impact on food production. Reduced seasonal runoff causes increased reliance on groundwater across the world for sustained agricultural production, leading to land subsidence in some parts of the world [70].

NASA sensors like the Advanced Microwave Scanning Radiometer-2 (AMSR2) and the Airborne Snow Observatory (ASO) can measure snow water equivalent (SWE) remotely. AMSR2 covers 99% of the Earth every 2 days, providing SWE retrievals at 25 km global resolution with about 80% accuracy over flat areas covered in dry snow. ASO, in turn, can provide SWE measurements at a spatial resolution of 50 m with an accuracy of 5–8%, but only over limited geographic regions [37].

There are about 800 snow telemetry (SNOTEL) sites located in remote, high-elevation mountain watersheds in the western U.S., operated as part of the USDA Natural Resources Conservation Service. These sites provide valuable information for forecasting downstream water supply. Some stations also include a snow pillow, which records the weight of the snow on top of it, and thereby the water equivalent, but these sites are limited to flat ground and do not represent the surrounding terrain very well [36]. Remote sensing of SWE by airborne instruments like ASO provides an alternative for understanding the entire picture, supporting effective management of water resources during both dry and high snowpack years.

One way to calculate SWE is to multiply snow depth by snow density (relative to the density of water) over a snow-covered area. However, direct measurements are often lacking, especially in remote areas. Therefore, agroclimatologists use remotely sensed measurements and models to infer where there might be flooding when snow melts and how much water can be expected for irrigation during the growing season [74].
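The depth-times-density relation described above reduces to a one-line conversion (the depth and density values below are placeholders):

```python
def swe_mm(depth_m, snow_density_kg_m3, water_density_kg_m3=1000.0):
    """Snow water equivalent in millimeters of liquid water."""
    return depth_m * (snow_density_kg_m3 / water_density_kg_m3) * 1000.0

# 1.2 m of snow at a typical settled density of 300 kg/m3 -> 360 mm of water
print(swe_mm(1.2, 300.0))
```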

SWE is monitored both for its potential to give advance warning of natural disasters, such as flooding due to rapid melting of winter snow in spring, and for its beneficial role as a much-needed water supply; it is thus used in crop monitoring and early warning activities (e.g., GEOGLAM Crop Monitor; FEWS-NET). The impact of drought on crop revenues in California alone was $856 million in 2015 [56]. Monitoring and understanding SWE using ground measurements, remote sensing, and modeling allows scientists to better forecast changes in SWE.

7 Observable: Soil Moisture

Soil moisture, defined as the amount of water stored in the soil profile, is an essential climate variable that plays a key role in the Earth’s water, energy, and carbon cycles. Soil moisture is a dynamic boundary condition between the land surface and atmosphere and controls the exchange of water and heat fluxes and storages between them. Thus, soil moisture has important impacts on water availability, ecosystem exchange processes, vegetation growth, and more. To this end, the availability of adequate and timely soil moisture information is of great importance for numerous applications, including weather forecasting and drought and flood mapping, which are tightly linked to monitoring crop health and yield formation. Water availability is also vital for crop growth and yield formation. Timely, within-season information on expected end-of-season crop production is critical for food security and related decision-making activities, as well as for identifying approaches to reducing the yield gap. Changes in soil moisture conditions are a direct response to weather variability and can be used to detect the occurrence of water-related stress that can potentially hamper plant growth and lead to suboptimal yield production.

Soil moisture monitoring can be achieved through the following techniques [79]:

  1. (1)

    Ground-based monitoring using in situ sensors:

    Observations collected at in situ stations characterize soil moisture with high accuracy but provide limited spatial coverage.

  2. (2)

    Satellite-based soil moisture estimation using radiative transfer modeling:

    This approach generates reliable global datasets with a typical accuracy of 0.04 m3/m3. The corresponding soil moisture estimates are representative of the top few centimeters of the soil profile (2–5 cm). Temporal coverage is limited to the operational life span of each mission, which typically does not enable long-term, stable climatologies based on individual sensors. Several passive and active systems currently provide operational global soil moisture datasets, including AMSR2, SMOS, SMAP, and ASCAT.

  3. (3)

    Model-based estimation using water or energy balance models:

    The model-based approach provides data with global coverage. The reliability of model-based soil moisture estimates is highly sensitive to the quality of the precipitation forcing. GLDAS, NLDAS, and FLDAS, generated with NASA’s LDAS framework, are examples of model-based soil moisture data products [75, 137, 138], detailed in the later section on modeling and assimilation.

  4. (4)

    Soil moisture monitoring using data assimilation techniques:

    These datasets are generated by integrating airborne- or satellite-based soil moisture observations into a hydrologic model, which enhances the model performance and corrects for precipitation related inaccuracies [26]. Examples of such data sets are the SMAP L4 Root-zone soil moisture and the NASA-USDA Global Soil Moisture Data [26, 79, 105]. The latter is operationally used by the U.S. Department of Agriculture-Foreign Agricultural Service (USDA-FAS) for assessing the impact of drought on crop production (Figs. 5 and 6) and generating the agency’s global crop statistics. These data are also utilized by the U.S. Department of Agriculture-National Agricultural Statistics Service (NASS) and the Group on Earth Observations Global Agricultural Monitoring (GEOGLAM).
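As a minimal sketch of the anomaly computation behind maps like those in Figs. 5 and 6 (the operational USDA-FAS product combines a Palmer model with SMAP assimilation; only the standardized-anomaly step is illustrated here, and the sample values are hypothetical):

```python
import numpy as np

def standardized_anomaly(current_sm, climatology_sm):
    """Standardized anomaly: (current - climatological mean) / climatological std.

    climatology_sm holds soil moisture for the same period (e.g., end of July)
    in previous years; negative values indicate drier-than-normal conditions.
    """
    clim = np.asarray(climatology_sm, dtype=float)
    return (current_sm - clim.mean()) / clim.std(ddof=1)

print(standardized_anomaly(0.12, [0.20, 0.25, 0.18, 0.22, 0.19]))  # < 0 -> dry
```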

Fig. 5
figure 5

Sub-surface soil moisture (SM) anomalies over South Africa developed using the USDA-FAS Palmer model and satellite observations from the NASA Soil Moisture Active Passive (SMAP) mission. South Africa has been experiencing a decline in rainfall, which reached record low amounts during the 2017 growing season, and had an adverse impact on the wheat production in the area. This is captured by the negative anomaly values (i.e., brown colored end of the scale bar) indicative of water deficiency for crop production

Fig. 6
figure 6

Sub-surface soil moisture (SM) anomalies over Australia at the end of July 2018 developed using the USDA-FAS Palmer model and satellite observations from the NASA Soil Moisture Active Passive (SMAP) mission. Soil was especially dry over New South Wales where the drought had impacted large areas of grazing and cropland

8 Observable: Evapotranspiration

Evapotranspiration (ET) describes the exchange of water vapor between the land surface and the atmosphere and includes water evaporated from the soil, water bodies, and other surfaces (E) and water used by plants through the process of transpiration (T). ET is central to processes that constrain agricultural food production, linking the energy, water, and carbon cycles in mutually dependent relationships [46]. An increase in energy (i.e., lengthening days, reduced cloud cover) favors carbon assimilation through photosynthesis (primary production) and also increases ET, extracting available water from the soil; ET represents the largest component of consumptive water use in the US.

If the soil water is not replenished through rain or irrigation, plants close their stomata to conserve water and primary production is reduced. The associated reduction in transpiration shifts the surface energy balance from latent heat (water exchange with the atmosphere) to sensible heat (heat exchange). By comparing observed ET to a modeled expectation of crop water requirements, ET observations can be used to schedule irrigation applications and improve agricultural water management. In rain-fed agriculture, reductions in actual ET are often a leading indicator that drought may impact food production [10, 11, 86]. Similarly, the link between transpiration and primary production can be used to inform agricultural yield predictions, and assess agricultural water use efficiency (crop per drop).

Despite its importance to understanding the agricultural food system, ET is also one of the least constrained components of the hydrological cycle. The lack of regular, spatially dense ET observations makes ET the greatest remaining data gap in water resources management. ET may play a key role in providing accurate and timely drought forecasts to water managers [47]. The ET-based Evaporative Stress Index (ESI) [6, 7, 9] is one of the few drought metrics to capture the magnitude, intensity, and timing of the 2012 US drought at resolutions applicable for management (~ 5 km) [86]. For retrospective studies, there are several other approaches available at spatial resolutions on the order of 25 km, e.g., the LandFlux evaluation [59, 81]. Continental-scale estimates of ET are based on more readily available meteorological and hydrological observations and require a significant process model or statistical framework. Long records of these observation-based estimates improve our understanding of the feedbacks within the climate system that directly affect our food system (e.g., [65, 78]).

Because ET can differ from field to field, a spatial resolution of 50–100 m is needed to infer actionable information for individual farmers. At that resolution, the most direct diagnostic of ET is the surface temperature observed by thermal infrared sensors, most notably on Landsat. Several approaches with a long legacy estimate ET from surface temperature observations in combination with an analysis of the surface energy balance (EB). Many of these approaches have found wide application in agricultural studies or water management (e.g., [3, 8, 21]).

The first group of larger-scale EB approaches treats evaporation as a single bulk flux that includes soil and vegetation sources and applies a scene-based scaling (e.g., SEBAL [20], SEBS [117], METRIC [4], and SSEB [110]). These approaches evaluate the energy balance at “dry” and “wet” extremes and estimate ET between these extremes based on the spatial variation of internally calibrated temperature within the satellite scene. In order to also assess agricultural water use efficiency, it is essential to distinguish between beneficial water use (transpiration) and non-beneficial water use (evaporation from the soil). Two-source EB approaches consider soil and vegetation as separate “sources” for heat and water exchange [63, 64, 84]; for example, ALEXI/DisALEXI combines the regional-scale ALEXI ET estimate with high-resolution observations (e.g., Landsat).
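A schematic of the single-source energy balance residual underlying approaches such as SEBAL and METRIC is sketched below; the flux values are placeholders, and operational models calibrate the sensible heat term internally rather than taking it as given:

```python
LAMBDA_V = 2.45e6  # latent heat of vaporization, J per kg of water (approximate)

def et_from_energy_balance(rn, g, h, seconds=86400):
    """Latent heat flux as the energy balance residual, converted to ET depth.

    rn, g, h : net radiation, soil heat flux, sensible heat flux (W m-2, daily average)
    Returns ET in mm of water per day (1 kg m-2 of water = 1 mm depth).
    """
    le = rn - g - h                 # latent heat flux, W m-2
    return le * seconds / LAMBDA_V  # mm/day

# Placeholder daily average fluxes for an irrigated field
print(et_from_energy_balance(rn=150.0, g=15.0, h=35.0))  # ~3.5 mm/day
```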

The processing and calibration of large sets of Landsat images is computationally demanding and impacts the availability and latency of high-resolution ET estimates for stakeholders. The use of cloud computing now allows Landsat images to be processed at a much greater scale. An example is the adaptation of METRIC to Google Earth Engine (GEE), allowing calibration of Landsat images with weather station data and generation of the Earth Engine Evapotranspiration Flux (EEFlux) [5]. Now anyone with Internet access can obtain Landsat data, choose a location, and see an evapotranspiration map within seconds. The OpenET effort builds on the initial success of EEFlux, adding additional ET models (both single- and two-source) to a GEE framework for ensemble assessment of predicted consumptive water use. OpenET will allow ready intercomparison between multiple high-resolution ET models over a broad range of climatic and vegetation cover conditions, enabling users to select the model that performs best in their area of interest or to extract a multi-model average.

9 Observable: Water Quality

Water quality is as important to food production as water quantity, but it is harder to measure from space because many of its characteristics are invisible. Fresh, clean water is needed for agricultural production, while fresh or salt water with a balanced, healthy ecosystem is critical for aquaculture as a sustainable food source. Land use choices control nutrient, sediment, salt, and pollution runoff into water bodies. When water bodies are significantly impacted, restrictions have to be imposed on agriculture to improve water quality; in this way, water quality can also affect water availability for agriculture. Additionally, the quality of water in catchments and reservoirs is important for healthy crops and livestock.

In recent years, coastal and inland water quality has been declining with population growth, expanded human activities near waterways, and climate change [124, 125]. In the U.S., the declining quality of freshwater systems has led to estimated annual economic losses of $4.6 billion for sectors including agriculture, aquaculture, and fishing, as well as tourism, real estate, and healthcare [33]. Other parts of the world have greater population pressure on their water quality, e.g., from raw sewage. Worldwide, the combination of warmer temperatures, increased intensity of storms causing flooding, erosion, and an overabundance of nutrient runoff from land have compromised adjacent waters with severe environmental impacts such as harmful algal blooms, dead zones with little or no oxygen, and the loss of biodiversity.

Harmful blooms of blue-green algae or cyanobacteria respond quickly to ecosystem changes and are an increasing problem due to warming temperatures and water column stratification combined with excess nutrients [87]. These harmful algal blooms have become a global health issue through fish and shellfish diseases and mortality as well as illness in humans and animals that eat them [13, 51, 77]. Livestock drinking water containing cyanobacteria can suffer reductions in growth, lactation, and reproduction or even mortality. Consuming fresh vegetables that have been irrigated with water containing cyanobacteria can also cause illness and mortality in humans.

The importance of water quality for food safety and security lends urgency to the need to remotely sense its parameters. Land use change, urban sprawl, ecosystem health, vegetation, and crop cover have been monitored by the Landsat Thematic Mapper and Enhanced Thematic Mapper at 30 m resolution about twice a month for several decades. Although not optimized for aquatic measurements, the NASA/USGS Landsat 8 Operational Land Imager (OLI) has added new spectral bands that can be applied to water resources and coastal zone investigations of water clarity, turbidity, chlorophyll-a, and surface temperature [49, 88, 89]. Furthermore, the frequency of these 30 m measurements can increase toward 3 days when Landsat OLI is harmonized with the European Space Agency (ESA) Sentinel-2 Multi-Spectral Imager (MSI) [32]. Water quality indicators derived from these sensors are gradually being applied to aquaculture decisions. One of the earliest applications addresses water clarity. Since 1866, water clarity has been quantified at discrete locations by the depth at which a Secchi disk lowered into the water from the surface disappears from view [91]. The deeper the Secchi depth, the better the water clarity. Satellite data are now used to remotely estimate this variable over large areas (Fig. 7) as a water quality indicator for fishing, crabbing, and shellfish aquaculture sites [114].

Fig. 7
figure 7

Secchi depth of the upper Chesapeake Bay and several tributaries derived from the Landsat OLI (left) and the same April 13, 2016 Landsat scene in true color (right). Credit: Lachlan McKinna and NASA Earth Observatory

Aquatic ecosystems in the open ocean have been continuously monitored from space for the past 20 years by NASA ocean color spectrometers [73]. These satellite-borne sensors were designed to provide a nearly daily view of the open ocean, where sampling opportunities are rare and expensive. The color measured at the ocean’s surface is used to derive concentrations of chlorophyll-a, the primary photosynthetic pigment in phytoplankton. Continuous monitoring of the whole Earth from the visible to near-infrared portions of the spectrum at 1–10 km spatial resolution has advanced our understanding of the mechanisms fostering global primary production. Ocean color sensors were not optimized for monitoring water quality in coastal and inland waters, where the myriad constituents in the water and overlying atmosphere are optically challenging and land adjacency effects further confound retrievals; moreover, their spatial resolution is too coarse for most inland water bodies. These technical issues, as well as concerns about satellite data continuity, have limited their adoption by water quality managers [80, 106]. Yet the great demand for this information has led to some clever adaptations in the coastal ocean, large estuaries, lakes, and rivers. Remotely sensed observations in the visible to near-infrared portions of the spectrum include water clarity, turbidity, sediments and detritus, chlorophyll-a and other pigments indicating phytoplankton biomass and community composition, shallow submerged and floating aquatic vegetation, surface oil slicks, and other variables estimated or inferred through regional correlations between field measurements and remotely sensed proxies (e.g., harmful algal blooms) [58, 83, 106]. Additionally, surface temperature from remotely sensed infrared measurements is another important variable related to water quality. Variables that cannot be directly sensed remotely include nutrients, dissolved oxygen, acidity or pH, microbes, and pollutants.
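Many of the variables listed above are estimated through regional empirical fits between field measurements and satellite reflectances; the sketch below illustrates that general workflow with a log-linear fit to a band ratio (the ratio, functional form, and matchup values are illustrative, not a published algorithm):

```python
import numpy as np

def fit_empirical_wq_model(band_ratio, in_situ_value):
    """Fit log(variable) = a + b * log(band ratio) to matched field/satellite data."""
    b, a = np.polyfit(np.log(band_ratio), np.log(in_situ_value), 1)
    return a, b

def predict_wq(a, b, band_ratio):
    """Apply the fitted regional relationship to new satellite pixels."""
    return np.exp(a + b * np.log(band_ratio))

# Hypothetical matchups: blue/green reflectance ratio vs. measured Secchi depth (m)
ratio = np.array([0.6, 0.8, 1.0, 1.3, 1.6])
secchi = np.array([0.7, 1.1, 1.6, 2.4, 3.3])
a, b = fit_empirical_wq_model(ratio, secchi)
print(predict_wq(a, b, 1.2))  # predicted Secchi depth for a new pixel
```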

Although satellite observations do not detect the presence of toxins, they are useful for estimating cyanobacterial abundance and directing in situ sampling [116]. The ESA MEdium Resolution Imaging Spectrometer (MERIS), which operated from 2002 to 2012, followed by the Ocean and Land Color Instrument (OLCI) on Sentinel-3, launched in 2016 [35], were designed with additional spectral resolution that enables the detection of cyanobacterial algal blooms (Fig. 8) [77, 115]. The increased spectral resolution of MERIS and then OLCI enables monitoring of the likelihood and frequency of occurrence of these cyanobacterial harmful algal blooms, yet their 300 m bin size limits coverage to about 6% of continental U.S. freshwater lakes and reservoirs [31, 128]. A 30 m bin size would resolve more than 60% of freshwater lakes and reservoirs [31]. Thus, a combination of sensors with additional spectral resolution and new methods to synthesize multiple types of measurements could improve this coverage in the future.

Fig. 8
figure 8

NOAA Lake Erie Harmful Algal Bloom Bulletin for July 30, 2018 shows medium cyanobacterial density in the southwestern lake, with a threshold for cyanobacteria detection of 20,000 cells/ml. Gray indicates clouds or missing data. Source: http://tidesandcurrents.noaa.gov/hab/lakeerie.html

Airborne and upcoming satellite-borne hyperspectral remote sensing present options for the detection of dissolved organic carbon and additional water quality variables (e.g., [45, 71]). After 2022, when NASA launches the Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite, information from its hyperspectral Ocean Color Instrument at 1 km resolution may be combined with higher spatial resolution data, and perhaps LiDAR for vertical information [24]. Coupling these sophisticated synoptic observations with in situ biophysical and bio-optical measurements and long-term datasets from sensor networks and monitoring programs will inform water resource planning to address goals of water and food security, biodiversity, and sustainable ecosystem management [54, 80, 109]. Challenges to global water quality monitoring by satellites remain, yet increasingly accessible Earth observations have the potential to significantly advance near-real-time water quality indicators in support of decisions related to food production and security around the world.

10 Observable: Air Quality

Food security programs usually focus on water, nutrition, and disruptions to food distribution systems, while the impact of air pollution on crops is often overlooked. The economic impact of crop yield loss due to pollution is significant all over the world, including in regions that experience food insecurity [2, 16, 130]. Most losses are caused by one pollutant, tropospheric ozone (O3), which can lower photosynthetic rates and decrease yield and yield quality [2]. Emberson et al. [40] define O3 damage hot spots as regions with more than 3 months of exposure to surface O3 concentrations above 44 ppbv. Global crop yield losses for wheat, corn, and soybeans are estimated to range from $11–26 billion (U.S. 2000) annually, and yield reductions may be as high as 50% for some crops in highly polluted areas such as India [29]. The greatest economic loss is estimated to occur in the United States ($3.1 billion), despite the fact that scientists have been working with farmers for decades to identify and propagate O3-tolerant varieties for high crop productivity (e.g., [2]). Crop losses associated with air pollution exposure are projected to increase for many world regions over the next decade, including in areas most vulnerable to food insecurity [17].
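The hot-spot definition from Emberson et al. [40] cited above can be expressed as a simple exposure count over a daily surface O3 time series; the use of daily means and a 30-day month below is an illustrative simplification:

```python
import numpy as np

def is_o3_hotspot(daily_o3_ppbv, threshold_ppbv=44.0, months_required=3, days_per_month=30):
    """Flag a location whose surface O3 exceeds the threshold for more than
    roughly three months' worth of days in a year (illustrative counting)."""
    days_above = np.sum(np.asarray(daily_o3_ppbv) > threshold_ppbv)
    return days_above > months_required * days_per_month

# Hypothetical year of daily mean surface O3 (ppbv)
rng = np.random.default_rng(0)
o3 = rng.normal(40.0, 10.0, 365)
print(is_o3_hotspot(o3))
```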

Surface-level O3, at concentrations above injury thresholds, reduces crop yields following uptake through a plant’s stomata (i.e., tiny pores on the lower leaf surface) and chemical reaction with plant cells. O3 injury to plants is often evident as a fine, tan-to-dark stippling pattern on the upper leaf surface that accumulates throughout the growing season (Fig. 9). However, the impact of O3 on plants is not always obvious to the naked eye. When O3 air pollution exceeds injury thresholds during air stagnation events, the pollutant can seriously affect overall plant health, ultimately reducing growth and yields. This effect is referred to as “hidden” O3 injury.

Fig. 9
figure 9

Characteristic O3-induced injury on the topside of green bean plant leaves. The stippling, which does not occur on veins, is associated with dark pigments accumulating within injured cells. O3 injury symptoms often vary with different crops. Photo Credit: Emerson Sirk/NASA

Although it is not currently feasible to infer surface O3 directly from satellite O3 observations, satellites provide information on the chemical precursors that lead to O3 formation, including nitrogen oxides (NOx = NO + NO2). NOx occurs naturally in the atmosphere, but human activities, such as the burning of fossil fuels, elevate its concentrations, allowing unhealthy levels of surface O3 to form. Nitrogen dioxide (NO2) serves as a proxy for NOx and is observable from space [66, 132]. Satellite NO2 data are used as input to computer simulations of atmospheric chemistry and transport to estimate surface O3 pollution. These simulations give valuable information on O3 levels in agricultural areas, the long-range transport of O3 from urban to agricultural areas, and how O3 levels are evolving over time. Current and future O3 concentrations can then be fed into crop models equipped with next-generation O3 response modules, enabling a more detailed examination of plant response to elevated O3 during different phenological stages or in combination with additional drought and heat stresses [40]. The model output may then be used to inform stakeholder decisions related to agricultural planning and air pollution management.

NO2 data from the Dutch-Finnish Ozone Monitoring Instrument (OMI), a spectrometer that observes solar backscatter radiation at visible and ultraviolet wavelengths, have given us an unprecedented look at how NO2 has varied around the world, including over agricultural regions (Fig. 10); OMI flies on the NASA Aura polar-orbiting satellite, which was launched in 2004 [38]. Several new satellite instruments of similar heritage to OMI were recently launched or are nearing launch and promise to provide even better NO2 data. For instance, the ESA TROPOspheric Monitoring Instrument (TROPOMI; launched in 2017) on the polar-orbiting Sentinel-5 Precursor satellite collects NO2 data at sub-urban spatial resolutions (e.g., a few kilometers), a much finer resolution than OMI [131]. Additionally, a fleet of satellites in geosynchronous orbit over East Asia (Korean Space Agency Geostationary Environment Monitoring Spectrometer (GEMS)), North America (NASA Tropospheric Emissions: Monitoring Pollution (TEMPO)), and Europe (European Space Agency Sentinel-4) will provide much-needed information on how air pollutant concentrations and emissions vary throughout the day; launches are expected in the early 2020s. Given the potential for air pollution to increase with projected population growth in the tropics and subtropics, geosynchronous satellites with similar capabilities are needed over the megacities and agricultural regions of the tropical and subtropical land masses as well.

Fig. 10
figure 10

From [38], OMI data show that NO2 levels have decreased from 2005 to 2016 by about 20–60% over most U.S. cities as a result of environmental regulations. As a national average, surface monitors indicate that O3 decreased by about 15% as a consequence, good news for both human and plant health. However, increasing trends in O3 pollution in other regions of the world pose a threat to food security

Observations of atmospheric ammonia (NH3) from satellite instruments give complementary information to NO2 data by indicating when and where nitrogen-based fertilizers are applied [135]. While thermal power plants and automobiles are the dominant NOx sources, the application of nitrogen-based fertilizers may also be an important source of NOx to the atmosphere in agricultural regions, potentially allowing high levels of surface O3 to form. Instruments observe NH3 at infrared wavelengths, e.g., IASI [30], CrIS [112], and AIRS [135]. While O3 pollution has a clear, negative impact on plant health, the impact of particulate matter (PM) pollution from dust and smoke is more complicated (e.g., [107]). Depending on concentration, PM in the atmosphere can either reduce or enhance crop yields by scattering light. For instance, PM can diffuse sunlight, creating a more even and efficient distribution of photons, which can offset the haze-induced reduction in total sunlight reaching the plant.

11 Physical Model: Hydrology Data Assimilation

Monitoring and forecasting drought and its impacts on crops requires an objective definition of drought or a “convergence of evidence” process by which drought may be defined. Land Data Assimilation Systems (LDAS) take many of these satellite-derived observations and assimilate them with other observations and model output for use in regularly gridded retrospective and current assessments and forecasts.

The NASA Land Information System (LIS) software provides data both to NOAA’s North American Land Data Assimilation System (NLDAS) Drought Monitor and the associated National Integrated Drought Information System [137, 138] and to FEWS-NET via the FEWS-NET LDAS (FLDAS; [75]). These LDAS systems use optimal inputs (forcing and parameters) to produce estimates of the water balance (precipitation, runoff, evapotranspiration, soil moisture) and energy balance (evapotranspiration, temperature, radiation). These data can then be used to derive indices, such as soil moisture anomalies (Figs. 11 and 12) and QuickDRI, that inform assessments of drought and crop growing conditions.

Fig. 11
figure 11

FLDAS soil moisture anomaly used within FEWS-NET, derived from the Noah land surface model forced with CHIRPS rainfall and MERRA-2 meteorology. Source: USGS/EROS

Fig. 12
figure 12

a NLDAS soil moisture anomaly derived from the Noah land surface model forced with meteorology from NCEP’s Eta model-based Data Assimilation System (EDAS) [95] and a merged precipitation product based on stations, radar, and reanalysis. b QuickDRI, derived from NLDAS soil moisture in addition to evapotranspiration, precipitation, and vegetation conditions from other sources

In 1999, the U.S. Drought Monitor (USDM) was established as a weekly map of drought conditions produced jointly by the National Oceanic and Atmospheric Administration (NOAA), the U.S. Department of Agriculture (USDA), and the National Drought Mitigation Center at the University of Nebraska-Lincoln. Internationally, the Famine Early Warning Systems Network (FEWS-NET), established in 1985 by the US Agency for International Development (USAID), produces a weekly map of drought conditions for Africa, Central Asia, and Latin America. Other international drought monitors include the Middle East and North Africa Drought Platform (e.g., [1, 111]). A number of other organizations collate data from sources including FEWS-NET, e.g., the UN Food and Agriculture Organization (FAO) Global Information and Early Warning System on Food and Agriculture (GIEWS), the Global Drought Information System Portal [90], and the GEOGLAM Crop Monitor led by the University of Maryland [22].

Given that different Earth observation products that rely on various sensors and models may not agree, analysts use a “convergence of evidence” approach. Evidence from different products is weighed by experts, who ultimately decide the classification and extent shown on both the US Drought Monitor and FEWS-NET Hazards maps. The US Drought Monitor employs a classification scheme in which each category/description has associated impacts as well as thresholds for different metrics, including the Palmer Drought Severity Index (PDSI), soil moisture percentiles, streamflow percentiles, the standardized precipitation index (SPI), and a composite index. Similarly, FEWS-NET has criteria for determining levels of dryness that increase in severity from abnormal dryness, to drought, to severe drought. The criteria for a “drought” classification, for example, are that (1) the area must have previously been classified as experiencing “abnormal dryness”; (2) the area must register season precipitation, soil moisture, and runoff deficits below the 20th percentile; and (3) there must be reports from the field of developing drought conditions and impacts on crops and water resources.

The NLDAS and FLDAS systems are updated routinely and provide long-term estimates of relevant conditions so that standardized indices and percentiles (e.g., of precipitation and soil moisture) can be computed, providing decision support to the analysts who generate the drought hazard maps.
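A hedged sketch of how the percentile-based criteria above might be combined into a simple check; the 20th-percentile threshold follows the FEWS-NET criterion quoted earlier, while the function structure and variable names are illustrative:

```python
import numpy as np

def percentile_of(current, history):
    """Empirical percentile of the current value within the historical record."""
    history = np.asarray(history, dtype=float)
    return 100.0 * np.mean(history <= current)

def meets_drought_criteria(precip, sm, runoff, histories,
                           previously_abnormally_dry, field_reports):
    """Check three criteria: prior abnormal dryness, all three hydrologic
    variables below the 20th percentile, and confirming field reports.

    histories: iterable of three historical records, ordered to match
    (precip, sm, runoff).
    """
    below_20th = all(
        percentile_of(value, hist) < 20.0
        for value, hist in zip((precip, sm, runoff), histories)
    )
    return previously_abnormally_dry and below_20th and field_reports
```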

12 Impact Models: Retrospective, Real-Time, and Future Analysis of Crops

Process-based crop models simulate day-to-day crop growth and development over the course of an agricultural season in response to environment, management, and genetics as determined by fundamental biophysical processes [61]. Environmental drivers include conditions within the soil profile (texture, temperature, and moisture within 5–10 soil layers extending to nearly 2 m below the surface) and surface meteorology (typically daily maximum and minimum temperatures, precipitation, and solar radiation; more advanced models also include relative humidity or vapor pressure and wind speed). Management information includes data on planting (dates, row spacing, row depth, etc.), inputs (irrigation, fertilizers), and harvest (equipment and limiting dates). Genetic information describes the fundamental traits of the crop variety (characteristics universal to a given species and those specific to the selected cultivar, typically represented as genetic parameters). Crop development depends on balanced flows of water, energy, carbon, and nutrients, which drive and respond to crop processes depending on phenological stage and the potential presence of stress factors (e.g., water, temperature, or nitrogen stress). Crop models can predict yield and resource use (water and nitrogen) to help optimize current and alternative systems under a variety of priority criteria.
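As one concrete example of the biophysical processes such models step through daily, a minimal growing degree day accumulation is sketched below (the base and cap temperatures are placeholders rather than parameters of any particular crop model or cultivar):

```python
def growing_degree_days(tmin, tmax, t_base=10.0, t_cap=30.0):
    """Accumulate growing degree days from daily min/max temperatures (deg C).

    The daily mean temperature is capped at t_cap and floored at t_base before
    subtracting the base, a common simplification in crop phenology models.
    """
    gdd = 0.0
    for lo, hi in zip(tmin, tmax):
        tmean = min(max((lo + hi) / 2.0, t_base), t_cap)
        gdd += tmean - t_base
    return gdd

# Placeholder week of daily temperatures
print(growing_degree_days([12, 14, 15, 13, 16, 18, 17], [24, 26, 28, 25, 30, 33, 31]))
```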

Earth information is critical to the configuration, evaluation, and application of crop models to meet a variety of stakeholder needs. Remote sensing data can determine the date and area planted for many crop species. In situ networks and remote sensing platforms provide meteorological observations, while weather and climate models fill in gaps and expand beyond observations with forecasts and projections. Crop models are often quite sensitive to common biases within atmospheric models, requiring additional bias-adjustment for improved fidelity [101, 102]. Simulated crop progress and status may also be compared against field and remotely sensed observations of crop conditions.
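The bias-adjustment step mentioned above can be illustrated with a simple multiplicative rescaling of model precipitation toward an observed climatology (a sketch only; the adjustment methods in [101, 102] are more elaborate, and the values below are hypothetical):

```python
import numpy as np

def scale_to_observed_climatology(model_series, model_clim_mean, obs_clim_mean):
    """Multiplicative bias adjustment: rescale model precipitation so that its
    long-term mean matches the observed long-term mean for the same location/month."""
    factor = obs_clim_mean / model_clim_mean
    return np.asarray(model_series, dtype=float) * factor

# Hypothetical 25% wet bias for a given station and month
adjusted = scale_to_observed_climatology([3.0, 0.0, 12.5, 6.0],
                                         model_clim_mean=5.0, obs_clim_mean=4.0)
print(adjusted)  # [2.4, 0.0, 10.0, 4.8]
```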

A well-configured and -evaluated crop model serves a variety of stakeholder-driven applications that range across a continuum of time scales and alternative farm systems. Models operating under historical conditions utilize (and potentially assimilate) multiple observations to attribute observed anomalies, establish climatological expectations, and potentially reconcile biases across diverse observational datasets within a physically consistent crop process framework. Crop models applied in near-real time contribute to monitoring and early warning efforts while also potentially providing timely forecasts of seasonal outcomes and intervention opportunities. Crop models may also be used to project outcomes under future climate conditions, alternative farming systems, or hypothetical extreme events.

The Agricultural Model Intercomparison and Improvement Project (AgMIP) is an international community of 1000+ experts working to develop agricultural system frameworks for applications related to resilient production and food security [97]. AgMIP facilitates the use of cutting-edge earth information and encourages ensemble modeling activities at the field scale [14, 15, 19, 48, 69, 72, 113] as well as across global grids [39, 82, 98]. Crop model output can be combined with broader integrated assessment models to evaluate the implications of large-scale policy and investment decisions [103], can include further impact factors (e.g., pests, diseases, and ozone damages; [34, 40]), and can directly link with other disciplines, scales, and models within coordinated assessments [99, 100, 104].

13 Damage Models: Pests and Disease

Agricultural lands respond strongly to anomalies in temperature, precipitation, and solar radiation, but additional biotic and abiotic pressures can also have acute impacts on short- and long-term production with broad consequences for local and global stakeholders. Here, we examine the unique threats posed by pests, diseases, and elevated ozone concentrations affecting agricultural production, as well as the observations and models that are needed to understand and apply earth information to improve decision-making.

While there are millions of specific pests and diseases that affect crop systems, these may be generalized according to critical climatological thresholds for their spread and the ways in which they affect plants [27, 34]. Earth information can identify conditions that are conducive to pest and disease spread, as well as recognize affected plants, as an element of early warning systems that allow corrective or preparatory interventions. Pests are often limited by total rainfall amounts and annual minimum temperatures that can interfere with reproductive and development cycles. Plants are more receptive to disease when the stem and leaf canopy is wet, with key sensitivities to diurnal cycles of relative humidity and temperature as well as extended periods of precipitation or flooding. Some pest vectors and disease spores are also carried by prevailing winds, with jet stream patterns shifting affected areas from year to year. Analysis of these metrics helps identify hazardous climate conditions that can be monitored, forecast, and projected into the future.
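As an illustration of how such climatological thresholds can be turned into monitoring metrics, the sketch below counts hours conducive to foliar disease from canopy humidity and temperature; the specific limits are hypothetical and would be pathogen-specific in practice:

```python
def disease_conducive_hours(rel_humidity, temperature,
                            rh_threshold=90.0, t_range=(10.0, 28.0)):
    """Count hours with near-saturated canopy conditions within a favorable
    temperature window, a simple proxy for infection risk.

    rel_humidity, temperature: hourly series (%, deg C) of equal length.
    """
    return sum(
        1
        for rh, t in zip(rel_humidity, temperature)
        if rh >= rh_threshold and t_range[0] <= t <= t_range[1]
    )

# Hypothetical overnight period
print(disease_conducive_hours([85, 92, 95, 97, 96, 88], [18, 17, 16, 15, 15, 16]))  # 4
```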

Remote sensing can detect declines in productivity and crop failures in affected areas, and technology empowers corporations and citizens to observe and document outbreaks using social media. Pest and disease modules have increasingly been added or coupled to crop models in recent years to forecast likely outbreaks and their ramifications and to attribute observed losses, leading to new decision support systems that could help users identify and prioritize actions [34]. Pest and disease modules coupled with crop and climate models also help stakeholders understand how climate variability, such as the El Niño/Southern Oscillation, and climate change shift the probability of outbreaks, aiding in the determination of preventative measures [96].

14 Risk Models: Sector Shocks and Disasters

The agricultural system is vulnerable to a wide array of natural and man-made hazards that can disrupt production, processing, transportation, and prices with direct and indirect implications for food security. Identifying and anticipating these shock events helps stakeholders respond to ongoing disasters, prepare for likely shocks, and build resilience in order to ensure food system stability.

Decision support systems may utilize NASA earth information in attributing, monitoring, and forecasting major agroclimatological hazards. Meteorological observations and atmospheric models track heat waves, cold snaps, floods, drought, heavy storms, hail, and freezing rain events that can decrease yield, damage production quality, or kill crops before harvest can even occur. The level of shock depends on the magnitude, spatial extent, duration, and timing of these extremes in comparison to critical crop development stages. More subtle weather sequences can be equally disruptive, as illustrated by two examples. First, “false starts” to the monsoon season occur when the initiation of seasonal precipitation encourages farmers to transplant, only to watch seedlings die as dry conditions return ahead of the persistent monsoon arriving weeks later. Second, late winter warming can melt snow cover and entice blooming of fruit trees, exposing vulnerable plants to frost damage when normal winter and early spring conditions return (Grotjahn et al., In review).

Models and observational products may also be used to track important external hazards affecting the food system. Weather products can identify conditions conducive to the spread of pests and diseases, while satellites can observe the resulting net reductions in agricultural productivity. Satellites are also important elements of response and recovery efforts following major disasters that can affect agricultural transportation networks, including hurricanes, landslides, earthquakes, tsunamis, and floods. Agricultural risks are a growing element of new efforts to examine interactions between disasters as part of the United Nations Sendai Framework for Disaster Risk Reduction 2015–2030 [126]. Nations that are party to the Sendai Framework have also committed to increased reporting of agricultural disasters, which will provide new ground-truth datasets that may be used to develop and evaluate next-generation decision support products.

While guarding against shock and disaster risk in one’s own region is critical, it is also important to remain vigilant against shocks and disasters affecting distant agricultural regions, given the increasingly interconnected nature of the global agricultural sector and its reliance on major food baskets and trading partners. Diverse trade networks can act to disperse shocks, but they can also spread risk widely, given elevated global exposure and streamlined flows of goods that have tended to concentrate production of key agricultural commodities in a few regions. Assessment of current and future risk therefore requires regional and global disaster information to be placed in the context of markets and consumer populations, while also recognizing the potential human toll of food insecurity.
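
The toy calculation below illustrates the point about diversification: a production shortfall in one exporter is felt by importers roughly in proportion to their sourcing shares, so diversified sourcing dilutes the shock while concentrated sourcing transmits it. The import-share matrix and the 40% shortfall are invented for illustration; real assessments would rely on reported bilateral trade data.

# Toy illustration of a production shock propagating through trade shares.
import numpy as np

# Rows: importing regions; columns: exporting regions; entries: share of each
# importer's supply sourced from each exporter (rows sum to 1). Values invented.
import_shares = np.array([
    [0.6, 0.3, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
])

# A 40% production shortfall in exporter 0, no change elsewhere (hypothetical).
export_shock = np.array([-0.40, 0.0, 0.0])

# First-order supply change felt by each importer.
supply_change = import_shares @ export_shock
print(supply_change)   # importers with diversified sourcing see a diluted shock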

The agricultural sector faces long-term shifts in its risk profile due to population growth, the rapid expansion of agricultural lands and infrastructure, socioeconomic development, technological innovations, geopolitical events, and global environmental changes including climate change and the degradation of soil and water resources. Changes in shock and disaster risk can be explored using a combination of climate projections (e.g., [108]), bias-adjustment of climate model outputs [101, 102, 121], process-based crop models [61, 97], and integrated assessment models incorporating future socioeconomic conditions [85, 129].
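
As one hedged example of the bias-adjustment step mentioned above, the sketch below applies empirical quantile mapping to climate-model output using an observed reference period. It is a generic illustration of the technique, not the specific algorithm behind any of the cited datasets.

# Generic empirical quantile mapping for bias adjustment of climate-model output.
# Array names are placeholders for 1-D samples (e.g., daily temperature at a site).
import numpy as np

def quantile_map(model_hist, obs_hist, model_future, n_quantiles=100):
    """Map future model values onto the observed distribution using a transfer
    function learned over the historical overlap period."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_hist, q)   # model historical quantiles
    obs_q = np.quantile(obs_hist, q)       # observed quantiles at the same levels
    # Replace each future model value with the observed value at the same
    # quantile of the model's historical distribution (linear interpolation).
    return np.interp(model_future, model_q, obs_q)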

15 Climate Change Models

Future agricultural systems will be shaped by overlapping pressures from climate change, population growth, socioeconomic development, and technological innovation. Long-term climate impact projections also shed light on present extreme events, elucidating likely trends and shifts in probability as the climate pushes toward a new equilibrium. Anticipating agricultural production and food security implications provides critical information for policymakers debating action to mitigate climate change, but also informs a variety of current stakeholder decisions with time scales of a decade or more.

To illustrate the types of decisions under consideration today, consider an agricultural region where climate projections indicate warmer mean temperatures, declining precipitation, and a later rainy season. Current crop varieties may no longer be suitable under the changed conditions; however, it typically takes 8 to 15 years to mass-produce targeted seeds, and even longer if key traits do not already exist in current varieties’ germplasm. This region may also require new water storage and distribution facilities for irrigation that can take a decade to construct and would be expected to last a century or more even as the climate continues to change. Farmers and extension services may also recognize the growing need to shift farm systems toward more suitable agricultural commodities, altering value chains and the utility of established processing plants and transportation facilities. Changing climate zones and food demand will also place tremendous pressure on water, land, and energy resources, with widespread implications for food prices and agricultural encroachment into natural ecosystems.

Anticipating climate change impacts on agricultural production requires a combination of process understanding to resolve the mechanics of production and resource use changes, present-day observations of climate and agriculture, probabilistic climate scenario generation, coherent coupling between multi-disciplinary systems (climate, biophysical, socioeconomic, and geopolitical), and consistent scenarios to place climate changes in the context of other global change pressures. Earth information products provide critical information about the world’s agricultural systems, current climate [50, 101, 102], and future climate projections [101, 102, 108, 121]. Process-based crop models driven by earth information inputs are particularly useful for climate impact studies given their ability to capture non-linear responses outside of observed conditions [61].
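
As a minimal example of the kind of non-linear temperature response embedded in process-based crop models, the sketch below accumulates growing degree-days with a base and an upper cutoff temperature. The cardinal temperatures are generic placeholders rather than parameters from any model cited here.

# Growing degree-day accumulation with base and cutoff temperatures, a simple
# non-linear thermal-time response of the type used in process-based crop models.
import numpy as np

def growing_degree_days(tmax, tmin, t_base=10.0, t_cut=30.0):
    """Accumulate thermal time (deg C days) from daily max/min temperature."""
    tmean = (np.asarray(tmax) + np.asarray(tmin)) / 2.0
    # Response is zero below t_base and saturates above t_cut (non-linear).
    effective = np.clip(tmean, t_base, t_cut) - t_base
    return effective.cumsum()   # accumulated degree-days through the season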

The Agricultural Model Intercomparison and Improvement Project (AgMIP) fosters an international community of climate, crop, livestock, economics, and nutrition experts to develop and apply multi-discipline, multi-scale, and multi-model frameworks to assess future agricultural production and food security [97]. AgMIP activities incorporate cutting-edge products and track the implications of climate changes and uncertainties [134] as impacts reverberate between local and global markets and the populations that depend on agricultural systems for adequate and stable food supply [100, 104, 136]. AgMIP has assessed agricultural responses to core climate change factors (i.e., shifts in temperature, precipitation, and CO2 concentrations) across local and global gridded crop model ensembles [103]. Transient simulations also elucidate shifting patterns of global production and water use [39, 98], and are useful in conjunction with pathways of agricultural system transformation that help stakeholders shape a more productive and resilient future [12, 129].

16 Conclusions

As the world’s population grows and the climate changes, food security is an increasingly urgent global problem, inextricably tied to water and energy and demanding a multi-sectoral, global solution. Global satellite data products and integrated models are required to better understand and manage resources in the food-water-energy nexus. Global monitoring of geophysical variables from satellites provides near-real-time quantification of the Earth system that can be assimilated into early warning and predictive tools. Here, we have highlighted several Earth observational products related to vegetation, water quantity, water quality, and air quality that can be combined with additional information to inform decisions around food production. Remote sensing by satellite and airborne sensors yields measurements over large areas on a regular, consistent basis, providing the ability to monitor changes over time.

Published literature shows recent progress in the adoption of Earth observations for agriculture and aquaculture applications, the former more quickly than the latter. As the challenges associated with calibrating and validating new measurements and new applications of existing measurements are gradually overcome, confidence in these capabilities will increase, leading to wider use and a better understanding of the benefits of remote sensing in support of food security. Sensors are currently being planned and built with finer spectral, spatial, or temporal resolution that can be integrated with increasingly sophisticated data assimilation and modeling to support informed decisions by farmers, fishers, humanitarian aid organizations, first responders, and more. New and emerging science and technology can foster solutions to some of society’s challenges regarding current and future hunger, malnutrition, and instability due to food shortages.