Introduction

The changing attitude of society towards a more sustainable planet, nowadays termed ‘neo-ecology’, is drastically changing conventional agriculture. Stockbreeding, crop cultivation and plant protection are being critically re-examined in view of environmental and human protection strategies to meet the standards of ‘agriculture green development’ (Davies and Shen 2020). Currently, agricultural land covers approximately five billion hectares, which is 38% of the available land on our planet (annual data FAO 2018). Agriculture must be updated in several respects to meet rigorous environmental protection targets. At the same time, a sustainable increase in productivity is indispensable because the human population continues to grow. Due to the COVID-19 pandemic, the number of undernourished people even increased from about 650 million (~ 8.4%) to 811 million (~ 9.9%) (annual data FAO 2021).

To guarantee sufficient food production, unnecessary production losses in agriculture must be avoided. Globally, integrated pest management (IPM) has reduced the harvest losses of the five major food crops (i.e., wheat, rice, maize, potato, soybean) that are attributed to plant pathogens and pests to 20–40% (Savary et al. 2019). Unfortunately, most fields are too large for growers to cost-effectively monitor yield-reducing causes, such as diseases, at regular time intervals. In addition, their detection and exact determination are complex. The field of remote sensing offers methods for high temporal- and spatial-resolution monitoring that can be used to efficiently deploy ground analysis and remediation action to diseased plants before financial losses are incurred and disease epidemics emerge. The application of remote sensing methods to plant disease detection is based on the fact that plant pathogens and pests change the way light interacts with leaves and canopies.

Remote sensing, at its core, is the use of non-contact, often optical sensors, such as RGB, multi- and hyperspectral, thermal, chlorophyll fluorescence, and 3D imaging, to obtain information about processes occurring in natural and artificial landscapes. Optical sensors offer the opportunity for non-destructive disease monitoring at different scales (Mahlein 2016). In addition to common techniques for plant disease/pest monitoring, which range from molecular assays to smartphone applications, sensors optimize and reduce the human effort of disease detection in the field (Silva et al. 2021). Though seemingly straightforward, disease detection using remote sensing methods in the field can be complex. Plant diseases themselves are complex as well: they often exhibit a heterogeneous distribution within crop stands and are highly dynamic in time and space due to dynamic interactions between living organisms within an ever-changing environment. Some of the most current challenges, research topics and achievements of digital plant pathology are summarized in Fig. 1, from a phytopathology perspective. The figure highlights that the main goal of digital plant pathology must be to address farmers’ needs.

Fig. 1

Achievements, challenges, and current research of digital plant pathology for adoption into field practice. Challenges are to capture and explain the complexity resulting from the triangular relationship of sensor, pathogen, and environment. Implementing new methods is hindered by the limited availability of plant protection measures and growing resistances. The analysis of big data is labor-intensive and needs sophisticated data-driven approaches, which can only be sufficiently interpreted by a multidisciplinary team. Currently, the development of agricultural robots, which can detect, assess and operate autonomously, is a research focus and, with regard to weeding, is very promising. Personal consulting is a driving force to introduce new technologies and digital possibilities into agriculture. In addition, computer/software approaches, as well as smart solutions, enable fast and interconnected access to global data

Therefore, we aim to provide a potential new direction for digital plant pathology research. We revisit some milestones of digital plant pathology and explore state-of-the-art imaging techniques and analysis methods. With this insight, we create a snapshot of the current technical state of applied digital plant pathology and ask whether we have already reached the goal of optimizing manual disease detection.

Digital plant pathology

Almost a century ago, in 1927, Neblette showed that aerial photography (RGB) enables disease surveys in agricultural crops. In 1933, Bawden discovered in the lab that a black-and-white representation of an infrared photograph resulted in high contrast between necrotic leaf spots caused by potato viruses and the surrounding tissue. The infrared images were compared to panchromatic (i.e., black-and-white images sensitive to all wavelengths of visible light) images, in which no obvious contrast was visible. When the same was done with tobacco leaves, the opposite happened, and panchromatic images showed the greatest contrast compared to infrared-filter images. The differences were explained by the different makeup of the necrotic areas: necrotic cells in potato contained chemical break-down products, while necrotic cells in tobacco were merely dead empty cells that differed in color from the rest of the leaf cells. These findings set the stage for the use of different spectral bands to detect differences in plant health.

The technical development of optical sensors accelerated, and Colwell (1956) remotely detected wheat rust and other diseases of grains by using military helicopters and infrared-filter cameras, as well as a spectrometer at oblique and nadir observation angles. Colwell suggested testing different combinations of spectral bands for disease detection. Based on the literature and on his photos and spectral reflectance curves, he proposed a new view on the interaction of light with plants and on the assessment and interpretation of crop photos for plant diseases. Colwell thus contributed an important theoretical framework that is still of utmost importance in digital plant pathology.

Since 2000, the idea of “foliar functional traits” has strongly emerged as a unifying concept in terrestrial remote sensing to better understand both natural variability in vegetation function and variability in response to stress (DuBois et al. 2018). Many traits shown to strongly correlate with natural and stress-induced variation in plant function (Wright et al. 2004) can be quantified and mapped with imaging spectroscopy (Townsend et al. 2003; Ustin et al. 2004; Asner and Martin 2009; Ustin and Gamon 2010; Heim et al. 2015; Wang et al. 2020). Originating in terrestrial ecology, the use of spectroscopy combined with chemistry and taxonomy has been coined “spectranomics” (Asner and Martin 2009, 2016; Zhang et al. 2020). The foundational components of this approach are: (i) plants have chemical and structural fingerprints that become increasingly unique as additional constituents are incorporated (Ustin et al. 2004) and (ii) spectroscopic signatures can be used to determine the portfolio of chemicals found in plants (Jacquemoud et al. 1995).

When applied to plant disease, spectranomics allows for accurate and non-destructive detection of the direct and indirect changes to plant physiology, morphology, and biochemistry that the disease induces, both pre- and post-symptomatically (Arens et al. 2016; Couture et al. 2018; Fallon et al. 2020; Gold et al. 2020). Beneficial (Sousa et al. 2021) and parasitic (Zarco-Tejada et al. 2018) plant–microbe interactions impact a variety of plant traits that can be remotely sensed. Changes in narrowband wavelengths have proven valuable for plant disease sensing due to their sensitivity to a range of foliar properties (Curran 1989). The ultraviolet range (UV; 100–380 nm) is influenced by secondary plant metabolites, while the visible range (VIS; 400–700 nm) is influenced by primary metabolites such as pigments. Internal scattering processes and the structure of a leaf alter the near-infrared range (NIR; 700–1000 nm), while chemicals and water cause alterations within the short-wave infrared (SWIR; 1000–2500 nm) (Carter and Knapp 2001). This means that disease-induced changes in nutrient content (Gillon et al. 1999; Zhai et al. 2013; Singh et al. 2015; Wang et al. 2016, 2020), water status (Gao 1996), photosynthetic capacity (Oren et al. 1986), physiology (Serbin et al. 2019), phenolics (Kokaly and Skidmore 2015), secondary metabolites (Couture et al. 2013, 2016) and leaf and cell structure (Mahlein et al. 2012; Leucker et al. 2016; Kuska et al. 2015, 2017) are displayed as changes in spectral reflectance. The foundational spectranomics approach offers an explanation as to why sensing technologies are capable of disease detection in the first place: remote imaging spectroscopy assesses the sum impact of the fundamental biochemical, structural and physiological processes that underlie the diseased plant phenotype (Mahlein et al. 2012; Leucker et al. 2016; Kuska et al. 2017, 2018a, 2018b, 2019; Zarco-Tejada et al. 2018, 2021; Asner et al. 2018; Sapes et al. 2021). Further ranges of the electromagnetic spectrum can also provide interesting information, but often it is not possible to attribute the detected changes to a specific cause (Mahlein 2016; Simko et al. 2016). As an example, infrared (8–12 µm) radiation can be measured with thermal cameras, which return a “calibrated” temperature of the plant. The temperature of plants correlates very strongly with the transpiration rate. In addition to recording the water balance of the plant or the crop, this enables the detection of potential drought stress before it becomes visible. Although the sensitivity of thermography and chlorophyll fluorescence sensors is very high, both techniques lack the ability to differentiate between abiotic and biotic stress and thus a causal connection to a specific disease (Mahlein 2016; Simko et al. 2016). However, a combination of sensors can indeed enable a specific characterization of plant diseases.

Within the last couple of years, Zarco-Tejada et al. (2018) were able to use a combined radiative transfer and machine learning approach (Hernández-Clemente et al. 2019) to pre-symptomatically detect Xylella fastidiosa infection in olive trees. This was achieved through a combination of hyperspectral NIR, thermal, and solar-induced fluorescence measurements. The authors found that spectral plant-trait alterations in response to X. fastidiosa infection, in both spectral stress indicators and pigment degradation traits, particularly the phaeophytinization-based spectral trait of chlorophyll degradation (NPQI), were essential for distinguishing asymptomatically infected plants from both symptomatic and healthy plants. Following up on this work, the authors found that NPQI was only indicative of asymptomatic X. fastidiosa infection in irrigated almond groves. This eventually led to the discovery of divergent pathogen- and host-specific spectral pathways in response to abiotic and biotic stresses that yield a similar visual manifestation (Zarco-Tejada et al. 2021). Even though both drought and bacterial infection cause the plant to wilt, the mechanisms by which they do so are different, and this difference could be captured with spectroscopy. The authors then used the thermal crop water stress index (CWSI) to uncouple the confounding interaction, reducing their misclassification rates from 37% and 17% to 6.6% and 6.5%, respectively. By assessing spectral trait measurements that captured the underlying physiochemical origin of the diseased plant phenotype, the authors were able to develop a robust disease detection and differentiation methodology for mapping asymptomatic X. fastidiosa infection in multiple crops at scale. This success bolsters and lends hope to ongoing investigations that seek to detect diseases in real-world, multi-stress environments (Fig. 1).
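To make these spectral indicators concrete, the sketch below computes NPQI from narrowband reflectance and a simple CWSI from canopy temperature. The NPQI band positions (415 and 435 nm) follow the commonly used phaeophytinization index definition, and the CWSI form shown is one common empirical variant scaled between wet and dry reference temperatures; all data, array names, and reference values are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

# Hypothetical per-plant canopy reflectance spectra, 400-1000 nm at 1 nm steps
wavelengths = np.arange(400, 1001)
reflectance = np.random.rand(50, wavelengths.size)       # placeholder spectra

def band(wl_nm: int) -> np.ndarray:
    """Reflectance of the band closest to the requested wavelength."""
    return reflectance[:, np.argmin(np.abs(wavelengths - wl_nm))]

# NPQI: phaeophytinization-based indicator of chlorophyll degradation
npqi = (band(415) - band(435)) / (band(415) + band(435))

# CWSI: canopy temperature scaled between wet and dry reference surfaces
t_canopy = np.random.uniform(22.0, 35.0, size=50)        # placeholder, degrees C
t_wet, t_dry = 20.0, 38.0                                # assumed reference temperatures
cwsi = (t_canopy - t_wet) / (t_dry - t_wet)              # 0 = unstressed, 1 = fully stressed
```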

Disease management in the field: can spectral imaging provide the required digital information to control plant diseases and pests?

Since the early 1990s, weather-based consultation and forecasting systems (e.g., proPlant Expert.com; RANTISMA) have supported disease management by warning of emerging pests and diseases and thereby enabling well-timed plant protection measures (Newe et al. 2003). The manual field check by the farmer is still necessary, but digital consulting systems optimize time management and the process of successful plant protection measures (summarized in Damos 2015). However, many techniques and methods are still labor-intensive, and therefore, further progress is necessary. Nilsson (1995) already concluded in his review that remote sensing offers a wide range of sensors and application scales, from satellites to ground-based platforms. Nevertheless, depending on the scale, pre-symptomatic and disease-specific detection, as well as the influence of the environment, remained a major challenge (Mahlein et al. 2012). This is because plant–microbe interactions cause only subtle changes in biochemistry and structure. The interactions can be divided into compatible (plant pathogenesis) and incompatible (plant resistance response) interactions. To differentiate pathogen attack symptoms, resistance reactions, abiotic stress and the spectral signatures of healthy leaves, each of these states has to be characterized in detail (Carter and Knapp 2001). Multi- and hyperspectral imaging is the preferred technique to study such interactions from the cell level to the canopy (Bohnenkamp et al. 2019a, b, 2021).

Variation within and between spectral reflectance signatures was already determined remotely with Landsat-2 imagery, which was used to monitor an epidemic in Pakistan in the late 1970s, the first-ever use of space-borne sensing to monitor disease (Nagarajan et al. 1984). However, a better spatial resolution was needed to precisely explore infections in the field, especially to characterize a pathogen. As the equipment to provide higher resolution was still in an early development stage, spatial resolution, spectral resolution, and costs were closely related. For instance, the amount of generated film could not be stored at reasonable costs, was tedious to analyze as human raters had to screen the images, and no computers were available to perform pixel-wise calculations. The overall progress of remote sensing for abiotic and biotic plant stress was summarized by Jackson (1986).

Two decades later, Chaerle et al. (2007) used thermal imaging and chlorophyll fluorescence to analyze tobacco plants resistant and susceptible to the tobacco mosaic virus, as well as Cercospora beticola on sugar beet. They achieved pre-symptomatic detection and indicated that the studied plant-pathogen interactions could be distinguished. Subsequent studies using hyperspectral imaging showed that it was possible to discriminate and characterize symptoms of sugar beet diseases caused by Heterodera schachtii, Rhizoctonia solani, Cercospora beticola, Uromyces betae and Erysiphe betae (Hillnhütter and Mahlein 2008; Mahlein et al. 2010; Hillnhütter et al. 2012). Hyperspectral imaging (HSI) further enabled the research community to gain a deeper understanding of plant-pathogen interactions. Leucker et al. (2016) were able to display differences in disease severity, governed by quantitative trait loci (QTL), on sugar beet leaves inoculated with C. beticola. The increase in phenolic compounds and the structural discontinuities caused by tissue collapse in response to fungal toxins explain the substantial decrease in reflectance of QTL leaves. Using HSI in the SWIR range, a variety of micro- and macronutrients such as nitrogen, magnesium, sodium, iron, or copper could also be identified in corn and soybeans undergoing water stress (Pandey et al. 2017). In addition, HSI can be extended to all parts of the plant. For example, Alisaac et al. (2019) showed that HSI of wheat spikelets infected by Fusarium head blight allows for the identification of mycotoxins, which was confirmed by quantification of fungal DNA.

Importantly, HSI comes with numerous advantages compared to classical visual monitoring or other analytical methods. It can be applied at different scales—from the cellular level for investigating plant tissue in combination with microscopes, through the individual plant scale in greenhouses or climate chambers, to the canopy scale in field applications with cameras mounted on unmanned aerial vehicles or airplanes (Bohnenkamp et al. 2019a, b, 2021; Heim et al. 2019a). However, in all cases, the analysis of HSI data must be done with care, as great complexity results from the triangular relationship between sensor, pathogen, and environment (Fig. 1). This relationship is further complicated by large amounts of often co-linear data (Thomas et al. 2018a). The effective analysis and interpretation of hyperspectral data are limiting factors for an implementation into plant phenotyping or precision agriculture (Mahlein et al. 2018). Automated analysis pipelines and sophisticated data mining and machine learning approaches are necessary to “uncover the spectral language of plants”, as shown by Wahabzada et al. (2016).

Data handling and machine learning

Once imaging has been completed, a data analysis pipeline must be developed and implemented to ensure the retrieval of meaningful information. In Fig. 2, we present a potential new multidisciplinary workflow for digital plant pathology research. Several requirements are prerequisites, and different subsequent or parallel steps are necessary. After successfully measuring plant data, preprocessing needs to be performed. Steps like de-noising, smoothing, calibration, image segmentation, and outlier removal must be included to transform the image data into features that can be used as input for machine learning routines (Behmann et al. 2015; Paulus and Mahlein 2020). The literature shows not only the importance of this step but also the huge effort that is required for different sensors in greenhouses and in the field (Bohnenkamp et al. 2019a, b, 2021; Thomas et al. 2018b). Hyperspectral measurements either use the raw reflectance signal as input or, to reduce data complexity, vegetation indices like the NDVI or OSAVI (Bohnenkamp et al. 2019a, b; Rouse et al. 1974; Rondeaux et al. 1996). Even though data complexity is reduced, vegetation indices retain their predictive power and can be used for phenotyping approaches with comparably low data input and subtle features. An example was shown for light leaf spot on oilseed rape plants (Veys et al. 2019). Multispectral approaches in the field can be used in similar ways but usually require a much higher effort for registration and data calibration due to the large area of interest and the fact that the environmental conditions change during image acquisition (Tmušić et al. 2020). For such circumstances, it is being discussed how 3D imaging can provide the necessary information for data calibration (Paulus 2020; Paulus et al. 2014).
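As a minimal sketch of this data-reduction step, the snippet below derives NDVI (Rouse et al. 1974) and OSAVI (Rondeaux et al. 1996) from a hyperspectral image cube; the band positions (670 nm red, 800 nm NIR), cube dimensions, and variable names are illustrative assumptions rather than a prescribed preprocessing pipeline.

```python
import numpy as np

# Hypothetical hyperspectral cube: height x width x bands, reflectance in [0, 1]
wavelengths = np.linspace(400, 1000, 160)                # band centers in nm (assumption)
cube = np.random.rand(200, 200, wavelengths.size)        # placeholder image data

def band_image(wl_nm: float) -> np.ndarray:
    """Image plane of the band closest to the requested wavelength."""
    return cube[:, :, np.argmin(np.abs(wavelengths - wl_nm))]

red, nir = band_image(670), band_image(800)

# NDVI: normalized difference of NIR and red reflectance
ndvi = (nir - red) / (nir + red + 1e-8)

# OSAVI: soil-adjusted variant with the canonical adjustment factor 0.16
osavi = (1 + 0.16) * (nir - red) / (nir + red + 0.16)
```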

Machine learning provides approaches to give meaning to the data. Supervised learning is used to train a classifier to separate different classes of infection or disease (Rumpf et al. 2010). For this, a labeled dataset to train the model is essential. Commonly, the labeled dataset is split into three subsets: a training set, a validation set, and a test set. The training set is used to generate a model, the validation set is used to validate the model and to fine-tune it, and the final test set is then used to calculate various accuracy and error metrics. In contrast to conventional data analysis methods, where rules are postulated to analyze the data, machine learning learns these rules through the above-mentioned training process. Although data labelling might require intense manual work, these methods enjoy great popularity in plant science.
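The following sketch illustrates this split-and-train routine with scikit-learn, using a support vector machine in the spirit of Rumpf et al. (2010); the placeholder features, labels, split ratios, and hyperparameter grid are assumptions for illustration, not the published protocol.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Placeholder feature matrix (e.g., vegetation indices per leaf) and labels
rng = np.random.default_rng(42)
X = rng.random((600, 8))                         # 600 samples x 8 spectral features
y = rng.integers(0, 2, 600)                      # 0 = healthy, 1 = diseased

# Split into training (60%), validation (20%), and test (20%) sets
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Fine-tune a hyperparameter on the validation set
best_C, best_acc = 1.0, 0.0
for C in (0.1, 1.0, 10.0):
    model = SVC(kernel="rbf", C=C).fit(X_train, y_train)
    acc = accuracy_score(y_val, model.predict(X_val))
    if acc > best_acc:
        best_C, best_acc = C, acc

# Final accuracy estimate on the held-out test set
final_model = SVC(kernel="rbf", C=best_C).fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))
```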

One type of machine learning algorithm, the neural network, has been rediscovered during the last decade. Invented in the early 1940s (McCulloch and Pitts 1943), this machine learning approach only became popular much later, with the availability of high computational power. Neural networks use the captured images for segmentation, classification, or regression tasks (Barbedo 2021). It must be considered that the data used to train these algorithms have to be of high quality and sufficient quantity to achieve results with high accuracy and low error.
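As an illustration of such a network (a minimal sketch, not a model from the cited works), the following Keras/TensorFlow snippet defines a small fully connected classifier for labeled reflectance spectra; the layer sizes, data shapes, and training settings are assumptions.

```python
import numpy as np
import tensorflow as tf

# Placeholder data: 500 labeled reflectance spectra with 200 bands each
X = np.random.rand(500, 200).astype("float32")
y = np.random.randint(0, 2, size=500)            # 0 = healthy, 1 = diseased

# Small fully connected network for binary classification of spectra
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(200,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),                # regularization against overfitting
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out 20% of the data for validation during training
model.fit(X, y, validation_split=0.2, epochs=20, batch_size=32, verbose=0)
```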

Further supervised machine learning methods also provide deeper insight into the importance of the input variables. Adapted algorithms like Boruta or Recursive Feature Elimination (Chen et al. 2020) provide an importance rating for the machine learning features. When used on hyperspectral plant disease data, these techniques can reveal the spectral regions and wavelengths that are most important for identifying infected plants (Brugger et al. 2021). In contrast to supervised machine learning methods, unsupervised methods do not need any labeled data or data splitting. Clustering approaches like k-means or hierarchical clustering group data with similar features and thus give semantics to the data by finding patterns of similarity (Wahabzada et al. 2015). However, the results are hard to interpret and subsequently need labelled data to be evaluated. Yet, these routines can be used to find groups of similar samples that have not been noticed before.
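A short sketch of both ideas with scikit-learn is given below: recursive feature elimination ranks wavebands for a supervised classifier, and k-means groups spectra without labels. The data, wavelength grid, and parameter choices are placeholders, and the specific estimators are illustrative rather than those used in the cited studies.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC
from sklearn.cluster import KMeans

# Placeholder hyperspectral dataset: rows are samples, columns are wavebands
rng = np.random.default_rng(0)
X = rng.random((300, 150))                       # 300 spectra x 150 bands
y = rng.integers(0, 2, 300)                      # 0 = healthy, 1 = diseased
wavelengths = np.linspace(400, 1000, 150)

# Supervised: recursive feature elimination ranks wavebands by importance
rfe = RFE(SVC(kernel="linear"), n_features_to_select=10).fit(X, y)
print("Most informative bands (nm):", wavelengths[rfe.support_].round(0))

# Unsupervised: k-means groups spectra by similarity, without any labels
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```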

At this point, biological insight is needed to connect the output of the machine learning methods to plant and infection processes (Fig. 2). Recently, approaches have integrated expert knowledge as active learning processes into the analysis pipeline, which significantly improved the quality and interpretability of machine learning outputs (Schramowski et al. 2020). To exploit such sophisticated data-driven approaches for real applications by agricultural experts, the models must be biologically interpretable; such models are now known as “white-box machine learning algorithms” (Fig. 1). They earned this name because they aim to be more explainable and transparent for users interested in the underlying causes of algorithmic outputs. The opposite are the earlier types of algorithms known as “black-box algorithms”. Recent developments include publicly available software libraries, such as the caret package (Kuhn 2008), Keras and TensorFlow (Géron 2019). Nevertheless, these models are only as powerful as their underlying training data and rely on high-quality annotated data.
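One generic way to make a trained model more transparent is permutation importance, sketched below with scikit-learn; this is offered only as an illustration of the “white-box” idea, not as the method of the cited works, and the data, wavelength grid, and random forest choice are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Placeholder spectra and labels
rng = np.random.default_rng(1)
X = rng.random((400, 120))                       # 400 spectra x 120 bands
y = rng.integers(0, 2, 400)
wavelengths = np.linspace(450, 950, 120)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Shuffle each band in turn and measure the resulting drop in accuracy
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
print("Bands driving the prediction (nm):", wavelengths[top].round(1))
```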

Fig. 2

The workflow for the interpretation of sensor data using machine learning and linking it to biological processes, using supervised learning and feature importance methods, is shown. Adding biological knowledge to the interpretation of features would allow for a more mechanistic and transparent machine learning approach than is currently the case. Each step in the process is often performed by a single expert. Thus, detailed knowledge of methods—especially in machine learning—is often not available. An approach involving experts from multiple disciplines would improve current workflows

Digitalization in agricultural practice: are robots the better farmer?

Since the last turn of the millennium, researchers have gained confidence in deploying unmanned terrestrial and aerial vehicles (Fig. 1). These can be equipped with reflectance-based sensors for disease detection with enhanced spatial resolutions, allowing for better discrimination between biotic and abiotic stress. Some systems reached a work rate of 3 ha/h (West 2003). Still, variations in illumination intensity, sun/sensor orientation, and/or background soil reflection impaired consistent and high-quality data retrieval. Another problem turned out to be soil dust, leading to detection errors, as well as physical damage to the crops by the vehicle itself. Nowadays, automation, mechatronics, sensors, electrical engineering and artificial intelligence have reached a level that enables a high degree of autonomy for mobile platforms such as drones, cars, and robots (see Fig. 1 “achievements”). In agriculture, autonomous robots equipped with sophisticated sensor systems are the next digitalization step for precise fertilization, pesticide spot-spraying and automated mechanical weeding. Automated robotic applications might even offer an alternative for overcoming shortages of human workers, especially for labor-intensive tasks such as harvesting vegetables or manual weeding (Lowenberg-Deboer et al. 2020).

Furthermore, the implementation of automated systems has re-designed agricultural production by considering spatial heterogeneities of plant pest distribution or of input parameters such as nutrients, water, and agrochemicals (Saiz-Rubio and Rovira-Más 2020; Wegener et al. 2019). The development of robotic applications for crop management differs with respect to crop type and cultivation system. One example is the use of UAVs in the field to release Trichogramma brassicae, a natural enemy of Ostrinia nubilalis (European corn borer), as a biological control agent in corn (Zhan et al. 2021). In contrast to the manual application of “Trichogramma bags”, UAVs enable a fast and practical application in open land.

In the greenhouse, higher levels of automation, such as robotic harvesting of, e.g., pepper (Arad et al. 2020) or robotic plant protection measures in tomatoes (Rincón et al. 2020; Cantelli et al. 2019), are already implemented. Field crops pose a variety of challenges as they can be randomly distributed (e.g., cereals) or planted in rows (e.g., corn, sugar beet, cauliflower). An increasingly frequent application is the selective removal of weeds within and between crop rows (Bakker et al. 2010) using actuators such as mechanical weeding tools, lasers, stampers or milling heads.

Prototypes of these weeding robots raised public awareness during the COVID-19 pandemic, when trained workers for manual weeding were not available (Mitaritonna and Ragot 2020). Weeding robots are developing rapidly, in particular for row crops. These robots are commercially available and can be equipped according to different working concepts. The first concept depends on highly accurate GPS positioning of the seed pill during sowing (Griepentrog et al. 2006). Precise seeding with only minimal error is the prerequisite for orientation and automated weeding. The robots use the weeding tools on the complete field, except for the area around each planted seed. The second concept is independent of the seeding step. Using digital cameras and an adapted vision recognition system, mostly based on neural networks and a huge underlying training dataset, the robot is able to detect the crop rows and to adapt its position, heading and navigation path. Furthermore, the weeding tools can be positioned in between or across rows (Machleb et al. 2020).

Position-based systems need a highly precise GPS signal, mostly coupled to a real-time kinematic (RTK) correction service, which needs to be booked from local suppliers. Without this, a proper operation is not possible. The system is based on pre-learned positions of seeds and is not aware of changes within the plant population. It does not detect if seeds do not germinate, were eaten by animals or rolled to another position during the sowing process. The robot will continue weeding around these spots or use its tools where it assumes weeds to be, regardless of the actual presence of a seed or seedling. Nevertheless, position-based systems are robust and operate independently of pre-learned image datasets. They only need the data from the sowing process, which is commonly performed by specialized machines that can do this step at high speed for many rows at the same time.

Sensor-based systems can operate on different types of fields, independent of the GPS position and the field structure. Camera images are analyzed in an adapted image processing pipeline. Here, the system first needs to separate vegetation from soil and, in a second step, crop from weed. For this, a machine learning model based on a neural network approach is used, which needs to be trained beforehand on datasets of the same crop under various environmental conditions (Bawden et al. 2017). The bigger this training dataset, the better the segmentation of vegetation and soil, or crop and weed, respectively (Baretto et al. 2021). By extending this dataset, the robot can be adapted for use on different crops.
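For the first separation step, a simple index-based baseline can already distinguish vegetation from soil, as sketched below with the Excess Green index; the actual systems rely on trained neural networks, and the threshold value and function name here are illustrative assumptions.

```python
import numpy as np

def vegetation_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Separate vegetation from soil using the Excess Green index (ExG = 2g - r - b).

    rgb: H x W x 3 array with values in [0, 1]. The threshold is a placeholder
    and would normally be tuned (or replaced, e.g., by Otsu's method) per dataset.
    """
    total = rgb.sum(axis=2) + 1e-8                           # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))        # chromatic coordinates
    exg = 2 * g - r - b
    return exg > threshold                                   # True = vegetation, False = soil
```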

While the position-based approach is hard to extend with further functions but easy to adapt to different crops, the sensor-based approach can be extended towards the adapted treatment of different weed types. Furthermore, weed maps that distinguish different types of weeds, as well as maps of plant properties like biomass, can be extracted from the sensor-based algorithms and used in a later step for the adapted control of bigger machines in the field. Such maps are currently mainly generated with UAVs (Stroppiana et al. 2018). This concept could be a new basis for subtle disease detection in the field (Görlich et al. 2021). Robots have been shown to be able to take over important tasks currently performed by trained workers or the farmers themselves. Nevertheless, their deployment is still not fully autonomous and productive; it needs surveillance and a well-designed application scenario in which field requirements are adapted to the robots, which makes extensive use difficult (Albiero et al. 2022). Currently, these robots show promising technology, but evaluation studies to quantify the weeding effect, the area efficiency, or limitations due to soil properties, climate and environmental factors are still not available. Future development will show that parts of the daily farm work will be done by robots in a way that differs from today's concepts. The research outlook and motivation will still be to develop an “All-In-One” farm robot, which combines all necessary tasks from seeding, field management and plant protection to harvest (Fig. 1).

A similar outlook for farmers is given for spaceborne monitoring since the European Space Agency's (ESA) Copernicus program launched its satellites SENTINEL-2A in June 2015 and SENTINEL-2B in March 2017. Besides environmental monitoring and vegetation observation, they enable the monitoring of crop diseases and pests. The SENTINEL-2 sensors have a spatial resolution of up to 10 m per pixel and cover a spectral range from 442 to 2200 nm with 13 spectral bands. Free data access is possible using different commercial software packages as well as no-charge browser solutions like the EO Browser by ESA (https://www.sentinel-hub.com). For some plant diseases and pests, the time span between images is critical and short-term applications in the field cannot be conducted, which is currently the main drawback of the freely available satellite data. This is because the image frequency depends on the revisit frequency, which is 10 days for each single SENTINEL-2 satellite and 5 days for the combined constellation. In addition, cloud cover might block the field of interest during imaging. Nevertheless, for retrospective field assessments, as well as research investigations and plant breeding processes, spectral images from satellites are a real benefit to map landscapes with relevant crop and cultivation parameters, identify vulnerable spots, assess the vegetation period or plan measures for future precision field management (Silva 2021; Segarra et al. 2020). Future satellite programs such as Landsat NeXt will likely unlock new applications and research directions (National Aeronautics and Space Administration, 2021).
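As a small usage sketch, the snippet below computes a 10 m NDVI map from the red (B04) and near-infrared (B08) bands of a SENTINEL-2 scene using rasterio; the file names are hypothetical and assume the bands have already been downloaded and clipped to the field of interest.

```python
import rasterio

# Hypothetical file names for 10 m SENTINEL-2 bands clipped to one field
with rasterio.open("S2_field_B04_10m.tif") as red_src, \
     rasterio.open("S2_field_B08_10m.tif") as nir_src:
    red = red_src.read(1).astype("float32")
    nir = nir_src.read(1).astype("float32")
    profile = red_src.profile

# Per-pixel NDVI at 10 m resolution
ndvi = (nir - red) / (nir + red + 1e-8)

# Write the NDVI map back to GeoTIFF for use in a field management program
profile.update(dtype="float32", count=1)
with rasterio.open("S2_field_NDVI.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)
```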

One of these applications could be an extension of projects that try to improve the protection of water bodies from unwanted plant protection chemical run-off. Farmers in Germany and Norway already have access to the H2Ot-Spotmanager (http://synops.julius-kuehn.de). It calculates the environmental risk for water bodies and their living organisms based on updated satellite, weather, and chemical data. Such applications show the manifold opportunities of satellite data, even with a resolution that cannot represent a single plant. Plenty of commercial field management programs that use satellite data are already available (e.g., Xarvio Field Manager, 365 FarmNet, FarmERP, Farmlogs, Agworld, AgriWebb). In these programs, farmers give access to their field data or their whole field index. These data are combined with weather and satellite data to give the farmer a complete overview of their growing crop and information on, e.g., plant nutrition, water status, plant health status, and necessary protection measures. Robots and satellite applications entail a large potential for plant protection, as these machines integrate optical sensors for monitoring and highly trained, efficient models for detection, and they are able to carry different actuators for adapted applications in the field (Balafoutis et al. 2020). In combination with weather-based forecasting systems and information platforms/databases (e.g., EPPO Global Database, ISIP, Animal and Plant Health Service (USDA)), machines could soon be trained to generate computer-based solutions and consulting before and during the vegetation season.

Outlook

As highlighted in this review, the achievements in digital plant pathology are great, but the potential is even greater. To exploit the full potential, the state of the art must be regularly questioned, while new challenges need to be defined and solved (Fig. 1). Because pathosystems can be very specific and complex, existing techniques must be critically evaluated and calibrated according to each pathosystem's details. Generalized frameworks and models that are intuitive and accessible for the farmer are necessary. To develop generalized models, a global database with disease and plant spectra could be a great foundation. An example of such a database from another field is the TRY plant trait database (www.try-db.org; Kattge et al. 2020). Challenges of such a collection of spectral data could involve having a standardized approach to clean and upload data (Paulus and Mahlein 2020). Access to the database must be simple, contributions should be acknowledged, data storage must be sustainable, and data curation for years or decades should be funded. Also, linking sensor type, ambient conditions and other necessary metadata to the uploaded dataset should be a requirement. Currently, many publications present analysis pipelines on a few, isolated databases (e.g., Plant Village Data; https://www.kaggle.com/emmarex/plantdisease) that have no relation to the complex situation experienced in the field. Often, the algorithms are not new, and the biological interpretation is missing. This, however, should be the prerequisite for novel publications in the realm of digital plant pathology. Interlocking the complex aspects of phytopathology, sensors, and machine learning is needed. A global database could help to capture and disentangle this complexity.

Unfortunately, concepts using optical sensors for plant disease detection in the field are not yet established, or are still in their infancy, so they cannot yet be integrated into decision-support solutions. While many calls exist for conducting interdisciplinary research to solve the remaining and persistent challenges, guidance in the form of funding or academic positions for this type of research rarely exists (Heim et al. 2019b; Bock et al. 2020; Brown et al. 2015). Therefore, we suggest the following research and action steps to support the development and application of digital plant pathology in the field:

  • Conferences for the development of an international spectral database (ISD) of the global main crops.

  • Obligation to provide (image) data for publication and inclusion into the ISD.

  • Investigation of the influence of abiotic factors on collected data.

  • A standardized framework for the collection of remote sensing data, including metadata on ambient and sensor conditions, and sufficient ground reference data.

  • Investigation into scale independence of spectral information.

  • Enabling machine communication (sensor, platform, computer, analysis software) on a common software basis.

  • Proof of concept for field applications based on ecological and economic standards.

Digital plant pathology, as well as the digitalization of agriculture as a whole, will change farmers' identity, skills, and work (Klerkx et al. 2019; Zolkin et al. 2021). To effectively implement digital technologies in practical agriculture, educated and trained farmers, as well as local consultants with a commitment to new digital technologies, are required. Until this adaptation happens, these technologies will only become available to farmers via companies, start-ups, or advisory services. Independent of the transfer from research to application, the common goal must be highly precise plant protection measures and higher performance without affecting the sustainability of the natural environment.