End-to-end Precision Agriculture UAV-Based Functionalities Tailored to Field Characteristics

This paper presents a novel, low-cost, user-friendly Precision Agriculture platform that attempts to alleviate the drawbacks of limited battery life by carefully designing missions tailored to each field’s specific, time-changing characteristics. The proposed system is capable of designing coverage missions for any type of UAV, integrating field characteristics, such as irregular field shape and obstacles, into the resulting trajectory. The collected images are automatically processed to create detailed orthomosaics of the field and extract the corresponding vegetation indices. A novel mechanism is then introduced that automatically extracts possibly problematic areas of the field and subsequently designs a follow-up UAV mission to acquire extra information on these regions. The toolchain is completed by a deep learning module developed specifically for weed detection in the close-inspection flight. For the development of this deep-learning module, a new weed dataset from the UAV’s perspective, which is publicly available for download, was collected and annotated. All the above functionalities are enclosed in an open-source, end-to-end platform, named Cognitional Operations of micro Flying vehicles (CoFly). The effectiveness of the proposed system was tested and validated with extensive experimentation in agricultural fields with cotton in Larissa, Greece during two different crop sessions.

Annually, around 20%-40% of agricultural production is lost due to pests and diseases [21]. This loss, along with the constant need for increased productivity, leads to excessive use of pesticides that clearly threatens human health and pollutes the environment.
The worldwide prevalence of the aforementioned issues has prompted a call for immediate action and improvement in the agricultural sector [26], aiming to develop more effective vegetative treatment strategies for enhancing the quality and quantity of crop yields. In this direction, remote sensing technology holds the promise of a feasible solution to the aforementioned challenges. Processing of sensory data based on artificial intelligence and visual analysis can provide propitious, so-called Precision Agriculture (PA) techniques for monitoring the production of food, fiber, wood, and other commodities. These PA techniques focus on building systems that automate important farming operations, such as monitoring the crop's growth and performance or assessing its fertilizer rate and water level. To achieve this, data acquisition from the agricultural fields can provide farmers with information regarding crop diseases, pest infestations, water stress, and other relevant issues that affect crop productivity. By further processing the acquired sensory data, PA systems aim to correlate production techniques with early crop management decisions to reduce agrochemical use and manage natural resources more efficiently while boosting overall productivity.
However, particular challenges need to be met for innovative smart-farming technologies to reach their promise [43]. The main challenge when combining innovative technologies in precision agriculture is technology standardization and data synchronization across the involved assets. Operating standards may differ across technologies, as many commercial and noncommercial services adopted in the agricultural sector were developed from completely different conceptual approaches and are intended to operate individually and only for a specific task (e.g., mission planning, image processing). This lack of interoperability causes monotonous and time-consuming work for the operators, since they have to manually feed the output data from one subsystem into another. Another challenging task is the data management plan. Even a small farm produces thousands of data records per crop session, and this information must be gathered to estimate and assess performance. Attempting to process and understand these data without a field service management system to store them, whether daily, monthly, or seasonally, can be overwhelming for both agriculturalists and farmers. Last but not least, many robot-related services designed only for scanning operations lack scalability and are sure to end up with limited energy autonomy when operating across hundreds of acres [9]. For precision agriculture to take hold, any technological tool that is expected to assist farmers must be scalable to the size of their needs, expectations, and farming operations. Conceptually, any agricultural software that integrates robotics and computer vision techniques must have a common, holistic view of agricultural processes, enhancing crop yield management decisions by assisting even non-expert users in proper manipulations.
In this direction, utilizing intelligent devices such as Unmanned Aerial Vehicles (UAVs) has expanded the capabilities of PA techniques by enabling flexible, wieldy solutions adjustable to the requirements of each case. Based on that, integrated end-to-end platforms have been developed to automate the procedures of UAV navigation, data collection, and processing for knowledge extraction and decision support. Thanks to such platforms, common commercial UAVs have started to bring economic and environmental benefits to agricultural production systems, making productivity in the primary sector more sustainable. In the following subsection, an overview of important and recent related work regarding end-to-end platforms in the field of agriculture is presented.

Related Work
Recent technological developments, namely a scale-up in sensor and UAV production with a simultaneous decrease in cost, have given rise to a booming startup industry. While applications were initially limited to military use, a large number of industrial and civilian use cases soon emerged, one of them being precision agriculture.
According to a recent report [11], the agriculture drone market will be worth $5.19 Billion by 2025, growing at an impressive Compound Annual Growth Rate (CAGR) of 31.1% from 2019. An overview and comparison of the most significant commercial agricultural products in terms of their farming technological features can be seen in the upper section of Table 1. Pix4D [35], DroneDeploy [16], and Sentera FieldAgent [5] are commercial drone mapping software packages that incorporate digital farming technologies aimed at precision agriculture and farming development. They can produce UAV flight paths, capture aerial photographs, visualize health indices, as well as provide a timeline view of previous field scans for continuous monitoring purposes. Agisoft Metashape [3] is advertised as a product for creating 3D maps using photogrammetry, including crop yield and plant health maps. Botlink [12] is a UAV-based agricultural software with an integrated flight planning framework that can provide high-definition 2-D and 3-D outputs as well as vegetation indices. While Blue River Technology [47] is not a UAV technology company, but rather a provider of smart farm machines to manage crops at a plant level, their product is mention-worthy, as it uses computer vision and deep learning techniques to monitor each plant in the field individually, a feature which integrated UAV products are currently lacking.

Table 1: Competitiveness matrix listing commercially available products and research works, including this work (CoFly), comparing them with respect to different features.
Research-wise, a large number of UAV-based applications have been developed for a variety of agricultural practices, such as crop health and yield monitoring (second half of Table 1), which make up the majority of agricultural applications due to their ease of use, portability, and economic viability compared to available commercial products; others incorporate crop spraying techniques [19], [20], which are rather limited owing to the lower payloads that most autonomous UAVs can carry. According to a recent review paper [39], most of these developed systems use commercial software for 3D photogrammetry, visualization, and health index calculation (Pix4dMapper [35], Agisoft Metashape [3]). In terms of contribution, these works mostly focus on improving specific workflows; for example, estimating the overall plant volume and the soil surface for specific crop parcels [15]; monitoring and recording the reflectance of the vegetation canopy [37]; estimating chlorophyll levels in rice paddies by evaluating vegetation indices derived from hyperspectral data [48]; periodically inspecting the status of crops while obtaining multiple images of them and calculating various indices, as well as pH level and acidity [49]; evaluating the capabilities of a hyperspectral camera in identifying the nitrogen status in rice crops [53]; monitoring and mapping vineyards as well as identifying discontinuous crop rows based on image stitching and health index computation [29].
From a motion planning view, a great majority of bespoke systems use the ROS framework [38], while others use the Ardupilot Mission Planner software [8], Pix4DCapture [35], or any other software compatible with the drone. While the aforementioned UAV flight planning software packages are broadly used and open-source, they do not offer capabilities that fit each field's characteristics, such as no-fly zones and automatic UAV-based weed detection. A deeper look into available open-source UAV flight controllers and flight simulators [17] reveals that there are ample options for flight control and mission planning [4, 6, 30]; however, none of these extends its capabilities beyond that to offer data post-processing.
Overall, several research works include remote sensing capabilities for a variety of UAV-based applications in precision agriculture. Despite their effectiveness and usefulness, the main drawback of these systems is that they are intended for specific tasks rather than a holistic view of agricultural processes. For example, some of them focus more on flight planning and health index calculation without providing an integrated system with a friendly user interface that eases and automates the utilization of the UAVs [29,37,48,49]. Others utilize separate commercial software for UAV planning, data acquisition, and crop health index computation, making their customization in agricultural applications difficult, as they require some expertise in order to be consolidated and managed as an overall system [15,53]. Besides, utilizing commercial products might be unaffordable, as some of them charge according to daily or monthly use. Finally, almost none of the aforementioned commercial or research works incorporate no-fly zones inside the area of interest; they support only generic coverage missions which are universally applied to all crop fields (no site-specific missions).

Contributions
In this work, we propose a holistic approach focusing on a high-level, user-friendly, and centralized platform developed to perform precision agriculture practices based on UAV functionalities, optimized for real-life use and specifically tailored to each field's characteristics. The proposed platform aims at enhancing the software-dependent abilities of the system in various robot-related research areas, while the hardware requirements are kept to the bare essentials so that the cost remains low.
In a nutshell, the overall workflow of the proposed platform is illustrated in Fig. 1.

3. Site-specific mission: CoFly software classifies the available VI maps and estimates the centers of the problematic areas, which are considered essential for further processing and are marked as Points of Interest. On top of that, an automated low-altitude site-specific mission, focused on site-specific treatment and weed detection, is designed to provide a more thorough inspection of these problematic locations.
Although several UAV platforms are available and capable of providing high-end, informative representations of the field, all of them are severely restricted by the hardware (mainly battery capabilities) of the deployed UAV, leading to limited energy autonomy and operational time.
The CoFly system aims to alleviate this drawback by additionally providing an automatic way of creating precision UAV flights that consider the extracted field-specific knowledge (site-specific mission). In other words, the idea of this added feature is to avoid over-detailed scanning of the whole field and instead combine a quick overview with cognition capabilities, alleviating the drawbacks of the UAV's limited battery life while operating across hundreds of acres and thus improving the operating time and energy efficiency of the monitoring process. More specifically, the novelty of this paper lies in a holistic framework for precision agriculture integrating the following unique characteristics and custom software solutions:

- Low-cost deployment by using off-the-shelf drones.
The proposed platform aims at providing the operator/farmer with a cost-effective and end-to-end integrated system to accomplish autonomous tasks that used to be completed manually (e.g., crop growth monitoring). Besides, it will play a supportive role in decision-making objectives (e.g., possible weed infestations), enhancing crop yield management decisions by assisting non-expert users in proper manipulations. A comparison of the main features of all available platforms, as well as the advantages of the CoFly approach, is presented in Table 1.
To the best of our knowledge, CoFly is the first to integrate (i) mission planning supporting no-fly zones and obstacles, (ii) plant health index calculation and visualization, and (iii) weed detection using deep learning, with (iv) a user-friendly and intuitive graphical user interface. All of the aforementioned services are built upon modules of enhanced and properly modified well-established methodologies, providing unique characteristics toward a fully functional end-to-end field service management system.
Our work is an open-source alternative, directly comparable to commercially available platforms, aiming to maximize the uptake of PA by end-users with no UAV flight experience. At the same time, we offer the research community a tool to develop and test custom solutions for specific agricultural applications. The overall CoFly ecosystem is open-source and publicly available 1 to the community.

Paper Structure
The rest of the paper is organized as follows. Section 2 presents the translation of a general coverage motion planning framework to an optimization problem and describes the developed algorithm for tackling such a problem. In this section, a simulated evaluation is performed in different scenarios to adequately analyze the performance of the developed algorithm. Section 3 presents the proposed methodologies for extracting vegetative knowledge regarding the crop's health status, along with a pixel-wise pipeline for detecting possibly problematic locations. Section 4 describes the proposed motion planning algorithm for dealing with the site-specific inspection and presents the weed detection module based on deep learning methods. Section 5 presents an overview of the graphical user interface and displays the overall workflow for the system's integration. Section 6 includes the real-life experiments along with the visual results, and finally, in Section 7, concluding remarks are given.

Problem Formulation
For the purpose of covering a continuous area, it is essential to deploy a method that provides safe and efficient paths while ensuring complete coverage of the agricultural field. To efficiently cope with challenges faced in real-world scenarios, one should analyze all the aspects that govern Coverage Path Planning (CPP) problems [24]. With the term CPP, we refer to the task of determining a path that passes through all the points of an area while avoiding any No-Fly Zones (NFZs)/Obstacles possibly defined within the operational area.
In this paper, the proposed platform deals with the problem of designing a path given:

A. Field-based info:
- a user-defined polygon (A), which contains a series of points that define the agricultural field to be scanned,
- a user-defined sub-polygon (O), if needed, representing the NFZs/Obstacles within A.

B. UAV-based info:
- the scanning density, in meters, representing the distance between two adjacent flight path lines,
- the flight direction, in degrees, defining the direction of the path within A,
- the flight altitude of the UAV for the mission,
- the speed of the UAV for the mission.
The objective is to compile all this user-defined information and extract a set of coordinates, representing a safe and efficient path, so as to provide complete coverage of A. Note that the defined problem does not assume any type of UAV, but rather adapts to each UAV's capabilities by tuning the UAV-based info (as defined previously) accordingly.
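The user-defined inputs listed above can be grouped into a simple container. The sketch below is illustrative only; the names `MissionSpec`, `field_polygon`, etc. are our own and not part of the CoFly API:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # (latitude, longitude) in WGS84

@dataclass
class MissionSpec:
    """Hypothetical container for the user-defined inputs of a coverage mission."""
    field_polygon: List[Point]                                   # polygon A delimiting the field
    obstacles: List[List[Point]] = field(default_factory=list)   # NFZ/obstacle sub-polygons O
    scanning_density: float = 5.0                                # meters between adjacent path lines
    flight_direction: float = 0.0                                # degrees
    flight_altitude: float = 100.0                               # meters
    speed: float = 3.0                                           # m/s

# Example: a triangular field near Larissa, Greece, with default UAV settings.
spec = MissionSpec(field_polygon=[(39.6, 22.4), (39.6, 22.5), (39.7, 22.5)])
```

Grouping the parameters this way keeps the field-based and UAV-based info in one place, so the same field definition can be reused with different UAV capabilities.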

Field Approximation on Grid
To begin with, let us assume that the agricultural field that needs to be covered has an arbitrary shape and is constrained within a two-dimensional polygon A defined by a set of coordinates (x, y). As a first step, A must be represented as a grid. For the grid approximation, A is decomposed into a finite set of equal cells which do not overlap while filling the entire area of A. The number of these cells reflects the required spatial resolution of the UAV's collected images for monitoring crop growth. It should be noted that in the proposed platform, the spatial resolution can be specified through the values of the flight altitude and scanning density.
The grid representation of A is described as follows:

$$U = \{ u_{(i,j)} \mid i \in \{1, \dots, rows\},\; j \in \{1, \dots, cols\} \}$$

where rows, cols enumerate the rows and columns after the grid discretization of the terrain to be covered. Apparently, the size of the grid U is given by n = rows × cols.
During cellular decomposition, we also assume that there are $n_o$ obstacles, each the size of a cell, placed in several known positions. The set of these obstacles is described as follows:

$$U_o = \{ u_{(i,j)} \in U \mid u_{(i,j)} \text{ corresponds to an obstructed area} \}$$

where each cell that corresponds to any obstructed area on the actual field is marked as occupied. Evidently, the number of cells that need to be covered is reduced to $l = n - n_o$. Therefore, the set of unoccupied cells that should be covered is described as follows:

$$L = U \setminus U_o$$
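A minimal sketch of this cellular decomposition is shown below, assuming membership is decided by testing each cell center with a standard ray-casting point-in-polygon routine. The function names are illustrative, not CoFly's:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def decompose(poly, obstacles, cell):
    """Approximate polygon A on a grid: return the sets U (all cells) and L (unoccupied)."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    U, L = set(), set()
    y, row = min(ys), 0
    while y < max(ys):
        x, col = min(xs), 0
        while x < max(xs):
            cx, cy = x + cell / 2, y + cell / 2   # center of the current cell
            if point_in_polygon(cx, cy, poly):
                U.add((row, col))                 # cell belongs to the field
                if not any(point_in_polygon(cx, cy, o) for o in obstacles):
                    L.add((row, col))             # cell is also unoccupied
            x += cell
            col += 1
        y += cell
        row += 1
    return U, L

# Example: a 10 m x 10 m field with a single cell-sized obstacle in the middle.
field_poly = [(0, 0), (10, 0), (10, 10), (0, 10)]
obstacle = [(4, 4), (6, 4), (6, 6), (4, 6)]
U, L = decompose(field_poly, [obstacle], 2.0)
```

With a cell size of 2 m this produces a 5 x 5 grid (n = 25) with one occupied cell, so l = 24 cells remain to be covered.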

Nodes Placement
Having defined the operational area U, we proceed by defining the allowed UAV movements, from each grid cell within L, to fully cover an agricultural field A. To achieve that, an undirected weighted graph G = (V, E) is introduced. Each vertex V ∈ G is placed at the center of each grid cell of the set L and represented as a node. The edges among these nodes denote the allowed movements of the UAV when navigating inside L. Generally, each node is connected with four adjacent nodes following the Von Neumann neighborhood [36]. Nodes that are placed on the boundaries of the operational area or are adjacent to an obstacle have fewer edges.
Definition 1 Two nodes $u_{(x_i, y_i)}, \nu_{(x_j, y_j)} \in G$ are considered to be adjacent if:

$$\left( |x_i - x_j| = s_d \wedge y_i = y_j \right) \vee \left( x_i = x_j \wedge |y_i - y_j| = s_d \right)$$

where $s_d$ defines the value of the scanning density in meters.
Definition 2 A valid path of length m is any sequence of nodes $X = (V_1, V_2, \dots, V_m)$ in which every pair of consecutive nodes is adjacent according to Def. 1.

In other words, the set of (V, E) ∈ G within L provides us with information corresponding to a sequence of adjacent nodes allocated at the centers of the unoccupied cells. Consequently, any series of nodes that follows Def. 2 constitutes a valid path that passes through all the points of U while avoiding obstacles. Hence, the ability of G to provide a valid path specifically tailored to each field's characteristics is also directly related to the characteristic hyperparameter flight direction. Modifying the flight direction feature optimizes the nodes' placement inside U to create a more favorable, in terms of coverage, representation of G.
The key idea is the rotation of each node within the grid U by an angle ϑ in a counter-clockwise direction around a center-point. Given the desired angle ϑ, determined by the flight direction value, the rotation of each node is defined by the following equations:

$$x_{new} = c_x + (x - c_x)\cos\vartheta - (y - c_y)\sin\vartheta$$
$$y_{new} = c_y + (x - c_x)\sin\vartheta + (y - c_y)\cos\vartheta$$

where the following constraints hold:
- A is defined in a two-dimensional space,
- the center-point $(c_x, c_y)$ is defined as the center of A,
- $(x, y)\ \forall V \in G$ are expressed in Cartesian coordinates,
- ϑ, which controls the rotation of G over U, is expressed in degrees.

By modifying the flight direction, more favorable configurations of G inside U may appear, in which the number of nodes within L is greater. Such configurations are more likely to provide paths that achieve a higher percentage of coverage for the given A. It must be emphasized that the $x_{new}, y_{new}$ variables rotate with respect to the user's desired scanning density.
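The rotation above is the standard two-dimensional rotation about an arbitrary center-point and can be sketched directly (the function name is illustrative):

```python
import math

def rotate_nodes(nodes, theta_deg, center):
    """Rotate grid nodes counter-clockwise by theta_deg degrees around a center point."""
    t = math.radians(theta_deg)
    cx, cy = center
    out = []
    for x, y in nodes:
        # Translate to the origin, rotate, translate back.
        x_new = cx + (x - cx) * math.cos(t) - (y - cy) * math.sin(t)
        y_new = cy + (x - cx) * math.sin(t) + (y - cy) * math.cos(t)
        out.append((x_new, y_new))
    return out

# Example: rotating the node (1, 0) by 90 degrees around the origin yields (0, 1).
rotated = rotate_nodes([(1.0, 0.0)], 90.0, (0.0, 0.0))
```

In practice, the node lattice is re-generated at spacing $s_d$ along the rotated axes, so the rotated grid retains the user's desired scanning density.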

Optimization Problem
Given the mathematical description presented above, the problem of finding a minimum-size collection of edges (UAV movements) that covers all the vertices of the graph can be translated into the minimization of the covering path X:

$$\min_{X} m \quad \text{s.t.} \quad \{V_1, V_2, \dots, V_m\} \supseteq L \tag{8}$$

where
- m is a positive integer which denotes the cardinality of the set X,
- l = |L| is the number of unoccupied cells,
- E is the set of edges-movements,
- X is a valid path.

Navigation Algorithm
Having illustrated all the prime features that constitute a completely safe and efficient path, we proceed to present, in steps, a navigation algorithm for solving the optimization problem of (8), as sketched in Fig. 2. The navigation system that we propose in this paper utilizes the Spanning Tree Algorithm [23], an off-line coverage path planning approach that covers any arbitrary shape while knowing in advance all related information about the location of the NFZs/Obstacles. Conceptually, this algorithm, along with the equations of Section 2.1, tackles the aforementioned challenges and achieves the overall mission objectives within five distinct steps.
Algorithm 1² outlines in pseudo-code the key steps of the proposed navigation algorithm.
Step 1. As a first step, in lines 1 and 2, the user's input data A and O are transformed from WGS84-EPSG:4326 into WGS84-EPSG:3857 [44]. For ease of understanding, let us analyze the usage of this transformation.
To start with, EPSG standard codes are used as Spatial Reference System Identifiers for map datasets. EPSG:4326 expresses the map dataset in a geographic coordinate system on the surface of a sphere (latitude/longitude), while EPSG:3857 projects the map dataset onto a flat, square-bounded plane. This transformation is needed because A and O are initially defined as a set of latitude/longitude pairs, while the scanning density is defined in meters. In latitude/longitude (EPSG:4326), distances are measured not in meters but in decimal degrees. To efficiently deal with this, the Pseudo-Mercator projection (EPSG:3857), which supports a metric system in meters, was used to project these latitude/longitude pairs onto a planar square. This projection allows applying the mathematical descriptions of Section 2.1, extracting the path points, and projecting them back to latitude/longitude pairs.
Having transformed the input coordinates A, O to a Cartesian plane, in lines 3-10, the entire area of A is grouped into large square-shaped cells, each of which is either entirely blocked or entirely unblocked, as shown in Fig. 2(a). It must be noted, as the algorithm's only requirement, that the areas which include stationary obstacles within the grid cannot be smaller than a large square-shaped cell.

² https://github.com/CoFly-Project/Waypoint-Trajectory-Planning
Step 2. In lines 11-21, after the discretization of the field, every unoccupied large square-shaped cell is translated into a node and for each pair of nodes, a number (the weight) is assigned to each edge. These nodes-edges, regulated by the flight direction feature, form the graph G = (V , E) which defines the allowed movements of the UAV ( Fig. 2(b)).
Step 3. As a next step (line 23), a graph minimization methodology based on Kruskal's algorithm [32] is applied to the previously formed graph G, as illustrated in Fig. 2(c). The resulting minimum spanning tree (MST) contains the minimum number of edges among all nodes (allowed movements) such that G remains fully connected. Figure 2(d) graphically depicts the resulting MST above the original discrete area of Fig. 2(b).

Step 4. Following the spanning-tree coverage paradigm [23], the coverage path X is then generated by circumnavigating the resulting MST so that every unoccupied cell is visited (Fig. 2(f)).
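Kruskal's algorithm used in Step 3 can be sketched with a standard union-find structure; the edge-weight convention below is illustrative, not CoFly's:

```python
def kruskal_mst(n_nodes, edges):
    """Kruskal's algorithm. edges is a list of (weight, u, v); returns the MST edges."""
    parent = list(range(n_nodes))

    def find(a):
        # Find the set representative, compressing the path along the way.
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    mst = []
    for w, u, v in sorted(edges):       # consider edges in increasing weight order
        ru, rv = find(u), find(v)
        if ru != rv:                    # adding this edge does not create a cycle
            parent[ru] = rv
            mst.append((u, v))
    return mst

# Example: a 4-node cycle; the heaviest edge (3-0, weight 4) is excluded from the MST.
mst = kruskal_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0)])
```

For a grid graph of l nodes, the resulting tree always has l - 1 edges, which is exactly the minimal set of allowed movements keeping G connected.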
Step 5. Once Algorithm 1 determines a path X, in line 27, a smoothing hyperparameter called corner radius is introduced and applied at turns. Figure 3 illustrates the path smoothing process, with the black line indicating the path X generated in Step 4 and the blue line indicating an optimal, in terms of covering time, X after the applied smoothing process.
Having defined the navigation algorithm, let us provide some necessary preliminaries about the flight hyperparameters (flight altitude, scanning density, flight direction, corner radius, speed) and how they affect the overall quality of aerial mapping. The most important of these hyperparameters are analyzed below.

Flight altitude. Low flight altitudes yield the highest resolution and best precision in mapping, with more matched features per image. In contrast, flying at a higher altitude covers more space per image and allows the capture of common unique features in multiple images, which can be useful when mapping areas with homogeneous imagery. Consequently, the flight altitude determines the overlap between two corresponding images in the same flight path line, as sketched in Fig. 4(a) (frontlap).

Scanning density. In the proposed CPP method, the distance between two sequential trajectories is determined by the value of the scanning density. As a result, this distance controls the overlap between images from two adjacent flight path lines, as sketched in Fig. 4(b) (sidelap).

Flight direction. The flight direction feature changes the direction in which the UAV flies and ensures that the extracted path pattern is tailored to the field. Figure 5 illustrates an example where the flight direction feature is applied to a specific complex-shaped polygon.

Adjusting the aforementioned hyperparameters to the crop field can lead to an optimized setup, generating a path pattern tailored to each field's characteristics and maximizing the chance of generating high-resolution maps.
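Under an assumed camera field of view (the FOV values, trigger interval, and function names below are hypothetical, not measured for any particular UAV), the relationship between these hyperparameters and the frontlap/sidelap can be sketched as:

```python
import math

def ground_footprint(altitude, fov_deg):
    """Ground distance covered by the camera along one axis, for a given field of view."""
    return 2 * altitude * math.tan(math.radians(fov_deg) / 2)

def sidelap(altitude, hfov_deg, scanning_density):
    """Overlap fraction between images on two adjacent flight lines (Fig. 4(b))."""
    return 1 - scanning_density / ground_footprint(altitude, hfov_deg)

def frontlap(altitude, vfov_deg, speed, trigger_interval):
    """Overlap fraction between consecutive images on the same flight line (Fig. 4(a))."""
    return 1 - (speed * trigger_interval) / ground_footprint(altitude, vfov_deg)

# Example: 100 m altitude, assumed 60-degree horizontal FOV, 20 m scanning density.
footprint = ground_footprint(100.0, 60.0)   # about 115.5 m on the ground
s = sidelap(100.0, 60.0, 20.0)              # about 0.83 (83 % sidelap)
```

The sketch makes the trade-off explicit: raising the altitude or reducing the scanning density increases the overlap (favoring robust stitching) at the cost of ground resolution or flight time, respectively.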

Simulated Evaluation
To assess the performance of the proposed navigation algorithm with respect to both the quantity of coverage and the overall mission duration, two evaluation metrics are employed: the Percentage of Coverage (PoC) and the Estimated Flight Time (EFT).

The metric PoC represents the percentage of coverage directly matched to a path X within A. Given the problem definition, PoC is defined as follows:

$$PoC = \frac{n_A}{n_{A_{max}}} \times 100\%$$

where $n_A$ denotes the number of nodes placed inside a polygon A which will be used for the MST construction, and $n_{A_{max}}$ denotes the theoretical maximum number of nodes that could fit inside the given polygon A. Thus, given A and the scanning density, $n_{A_{max}}$ is formulated as follows:

$$n_{A_{max}} = \frac{A}{s_d \times s_d}$$

where A is defined in m² and the scanning density $s_d$ is defined in meters in the x-axis and the y-axis, respectively (Algorithm 1, lines 7 & 9). The second metric, EFT, represents the estimated time, in minutes, for the UAV to follow a path. As for covering time, the EFT is determined by the simulation time.
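The PoC metric follows directly from these definitions; a minimal sketch, assuming a square cell of side equal to the scanning density in both axes (the function name is illustrative):

```python
def poc(n_a, area_m2, sd_x, sd_y):
    """Percentage of Coverage: nodes selected for the MST over the theoretical maximum.

    n_a     -- number of nodes placed inside polygon A
    area_m2 -- area of A in square meters
    sd_x, sd_y -- scanning density along the x- and y-axis, in meters
    """
    n_a_max = area_m2 / (sd_x * sd_y)   # theoretical maximum number of nodes in A
    return 100.0 * n_a / n_a_max

# Example: 31 nodes inside a 3200 m^2 field with a 10 m scanning density.
coverage = poc(31, 3200.0, 10.0, 10.0)
```

With these numbers the maximum is 32 nodes, so placing 31 of them yields a PoC of 96.875 %, mirroring the kind of values reported for the simulated scenarios.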
The RotorS UAV simulator framework [22] was utilized to test and evaluate the proposed navigation algorithm. RotorS is an open-source simulator for unmanned aerial vehicles based on the Robot Operating System (ROS) and the Gazebo 3D robotics simulator. All the experiments were carried out in the waypoint-navigator package [34] of the aforementioned simulator, which is a high-level waypoint-following package that simulates a trajectory through a list of waypoints formatted in WGS84-EPSG:4326. For the representation and visualization of the trajectories we used the RViz (Robot Visualization) program [45], an auxiliary tool for visualizing our waypoint data in a real domain. RotorS was hosted on a common laptop (Intel Core i7-8750H CPU) with Ubuntu 16.04 LTS, using Gazebo v7.7 and the ROS Kinetic distribution, respectively.
For the evaluation of the proposed method, a set of 6 generated polygons is used. This set includes complex-shaped and non-complex-shaped polygons, with or without obstacles. The simulations were carried out with the default UAV model of RotorS equipped with GPS and IMU sensors. Moreover, it is assumed that the UAV is initially located at (Lat 1, Lon 1) of each simulated trajectory X. For all sets, the proposed method was executed with a fixed flight altitude and speed of 100 m and 3 m/s, respectively. Additional information associated with each scenario is presented in Table 2. It should be noted that each scenario was designed in such a way as to demonstrate the performance of Algorithm 1 and test the effect of the flight direction and corner radius hyperparameters on the resulting path in terms of coverage percentage and time. The rest of the hyperparameters were not evaluated in RotorS, as they do not directly affect the path pattern but rather the image overlap settings.
For each scenario, Algorithm 1 was employed to calculate the respective UAV path solving optimization problem (8), aiming at maximizing the PoC while minimizing the EFT. The visualization of the produced trajectories X for each scenario, along with their corresponding performance, is presented in Fig. 6 and the last two columns of Table 2.
The results of the evaluation procedure are reported and analyzed in the following paragraphs.
In Scenario #1, 31 nodes with flight direction = 0 are selected for the MST construction as important regions of interest for the UAV. In Scenario #2, by only modifying the flight direction, the navigation algorithm finds a more profitable, in terms of coverage, configuration of U inside A, with 32 nodes selected as important regions of interest. As presented in the last column of Table 2, the resulting path X of Scenario #2 achieves better coverage than that of Scenario #1. However, due to the increased number of waypoints, the flight time of Scenario #2 is greater than that of Scenario #1. Hence, Scenario #3 is considered to demonstrate the effectiveness of smoothing the trajectory at turns. As presented in the EFT column of Table 2, by tuning the corner radius, the overall flight time of the resulting path X of Scenario #3 decreases compared with Scenario #2, while providing the exact same percentage of coverage. Overall, the developed algorithm designs the UAV paths in such a way as to achieve a high PoC both in complex and non-complex fields while keeping the flight time to a minimum, enabling the prolonged utilization of the UAV in the agricultural application.

Location Estimates for Crop Health Monitoring
To maximize the system's usability and incorporate novel and cognitive functionalities, the framework involves sequentially executed processes that eventually improve the crop monitoring process. After the execution of the generic coverage mission presented in Section 2, a set of RGB georeferenced images covering the whole field has been collected. This set forms the basis of the subsequent processing procedures, which aim at i) extracting knowledge regarding the crop's health status and ii) providing identified locations over the field that correspond to the centers of detected problematic areas. The following sub-sections analyze the integrated procedures.

Image Stitching Process
The aim of the current module is to merge the collected visual data and provide an orthomosaic as an accurate map of the examined area. Towards this direction, an appropriate tool for image stitching has been deployed, based on the following process:

1. First, the telemetry is extracted from the metadata of each image in order to speed up and improve the accuracy of the stitching process.
2. Then, a sparse point cloud is generated with the Open Structure from Motion (OpenSfM) [2] library.
3. The generated point cloud is transformed into triangle meshes via the Poisson Surface Reconstruction [10] method.
4. The triangle meshes are textured by selecting appropriate patches from the acquired images, generating the georeferenced orthoimage.
The above process can be implemented with the OpenDroneMap (ODM) [1] toolkit. In Fig. 7 the generated orthophoto of an examined area is demonstrated. The outcome of the aforementioned stitching process constitutes a comprehensive, map-quality representation of the field and can be further processed as a whole to extract qualitative features, such as an estimation of vegetation health, and to provide the end-user with an overview of the crop status.

Vegetation Indices (VIs)
To enhance the capacities of the proposed framework toward a cognitive precision agriculture system, the previously extracted orthomosaic is further processed to compute vegetation indices. The use of a multi-spectral camera would provide accurate and more meaningful data for estimating plant health via the calculation of pertinent VIs. Despite their wide exploitation and increased performance in precision agriculture applications, deploying a multispectral sensor significantly increases the total cost, while technical restrictions might be introduced, such as a considerably lower acquisition rate. On the other hand, employing off-the-shelf UAVs with RGB cameras constitutes a low-cost and easy-to-deploy solution. To this end, VIs focused on the visual spectrum were incorporated, as the usage of a visual camera is more flexible and affordable. Therefore, multiple VIs were adopted and developed to cover the majority of aspects related to crop health. The employed RGB-based VIs are presented in Table 3.
Each one of the four selected VIs represents the actual reflectance of the field's vegetation in different color bands and thus can reflect different measures of crop health. More specifically, the Visual Atmospheric Resistance Index (VARI) [25], originally designed for satellite imagery, accounts for blue light scattering, as the blue spectrum travels at smaller wavelengths. On the contrary, the Green Leaf Index (GLI) [33] was proposed to measure wheat cover, and all main color bands are considered in a weighted approach. Similarly, the Normalized Green-Red Difference Index (NGRDI) [27] is obtained by calculating the reflectance of the green and the red zone of the electromagnetic spectrum. A similar approach is adopted for the Normalized Green-Blue Difference Index (NGBDI) [52]; nonetheless, the estimation relies on the reflectance of the green and the blue zone. These VIs were selected as an established choice capable of reflecting and representing the under-study agriculture-related issues.
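For concreteness, the four indices of Table 3 can be computed per pixel directly from the R, G, B bands. The sketch below uses their standard published formulas; the small ε guarding against division by zero is an implementation detail not discussed in the text, and the function name is illustrative:

```python
import numpy as np

def rgb_vis(img):
    """Compute the four RGB-based vegetation indices of Table 3.

    img: float array of shape (H, W, 3) with channels R, G, B in [0, 1].
    Returns a dict of (H, W) index arrays, each nominally in [-1, 1].
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    eps = 1e-8  # guard against division by zero on dark pixels
    return {
        "VARI":  (g - r) / (g + r - b + eps),
        "GLI":   (2 * g - r - b) / (2 * g + r + b + eps),
        "NGRDI": (g - r) / (g + r + eps),
        "NGBDI": (g - b) / (g + b + eps),
    }
```

A pure-green pixel yields a value of 1 for all four indices, while a gray pixel (equal bands, i.e. no vegetation signal) yields 0, matching the red-to-green interpretation used in the maps of Fig. 8.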
The estimated VI maps are defined within a limited range and displayed to the operator by utilizing a red-green colormap at the appropriate scale, as demonstrated in Figs. 8(a)-(d). Since each calculated VI reflects different aspects of the incorporated visual light, the corresponding visualization for every VI is unique, as different aspects of plant health are captured.

Location Extraction of Problematic Areas
The extracted VI image representations can provide the end-user with a concrete overview of the crop's health. Yet, to automate the procedure, individual field regions that present poor conditions, in terms of vegetation health, have to be defined. This module aims to detect these problematic areas and feed their locations to the site-specific mission module for a targeted, on-the-spot inspection. The significance of VIs lies not only in their ability to visualize the health status of the crop but also in the quantification of this status by assigning a single value to each pixel of the processed image, usually in the range [−1, 1]. Based on that, a pixel-wise processing pipeline has been developed to detect the regions where the corresponding index value is low and which can thus be considered problematic. The deployed module consists of the following processing steps:

1. Outlier detection
2. Binary thresholding
3. Connected components detection
4. Centers of mass calculation
5. Centers aggregation
Although the typical range of the presented VIs is [−1, 1], in real-world cases the actual range can differ significantly, and values close to the typical extrema can be considered outliers. Towards this direction, an initial filtering of the index array outliers takes place. At first, the histogram of the index array is calculated. Assuming that k1 is the first histogram bin, corresponding to the interval [−1, −0.9), in case the frequency of this bin is smaller than 5% of N, where N is the number of pixels corresponding to the entire field region, the values belonging to the specific bin range are considered outliers and −0.9 is the actual minimum (denoted as min) of the processed index array. The process is repeated for the next bin until the outlier criterion is no longer satisfied. A similar approach is applied to estimate the maximum value (max).
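The histogram-based trimming described above can be sketched as follows. The bin width of 0.1 (20 bins over [−1, 1]) matches the [−1, −0.9) example in the text, while the function name and signature are illustrative:

```python
import numpy as np

def effective_range(index, bins=20, outlier_frac=0.05):
    """Estimate the actual [min, max] of a VI array by discarding
    sparsely populated histogram bins at both extremes.

    Bins whose frequency is below `outlier_frac` of the total pixel
    count N are treated as outliers, starting from the extrema and
    moving inwards until the criterion is no longer satisfied.
    """
    values = np.asarray(index, dtype=float).ravel()
    values = values[np.isfinite(values)]
    n = values.size
    hist, edges = np.histogram(values, bins=bins, range=(-1.0, 1.0))
    lo = 0
    while lo < bins - 1 and hist[lo] < outlier_frac * n:
        lo += 1                      # drop sparse bins at the low end
    hi = bins - 1
    while hi > lo and hist[hi] < outlier_frac * n:
        hi -= 1                      # drop sparse bins at the high end
    return edges[lo], edges[hi + 1]  # actual (min, max) of the index
```

For an index array concentrated in [0, 0.5] with a handful of stray values near ±1, the function returns approximately (0.0, 0.5) rather than the nominal (−1, 1).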
Next, binary thresholding is applied to separate the problematic areas from the healthy ones. Problematic areas are considered those with pixel values in the range [min, t], where t is the binary threshold, equal to t = min + 0.25(max − min). Although t is set by default to one quarter of the VI's actual range above the minimum, one could fine-tune this value manually to achieve better, crop-oriented performance. In Fig. 9(a) the corresponding binary image acquired by thresholding the VARI index of Fig. 8(a) is depicted. Afterward, connected components analysis is employed to detect connected components of the binary image, each corresponding to a single problematic area. For every detected area, the center of mass is calculated, leading to a set of points, as presented in Fig. 9(b).
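Steps 2-4 of the pipeline (thresholding, connected components, centers of mass) admit a compact sketch with SciPy's ndimage routines. The paper does not specify its implementation, so this is only an illustrative equivalent:

```python
import numpy as np
from scipy import ndimage

def problematic_centers(index, vmin, vmax):
    """Threshold a VI array at t = vmin + 0.25*(vmax - vmin) and return
    the center of mass (row, col) and pixel count of each connected
    low-index region."""
    t = vmin + 0.25 * (vmax - vmin)
    mask = (index >= vmin) & (index <= t)   # problematic pixels
    labels, n = ndimage.label(mask)         # connected components
    ids = list(range(1, n + 1))
    centers = ndimage.center_of_mass(mask, labels, ids)
    sizes = ndimage.sum(mask, labels, ids)  # component areas in pixels
    return np.array(centers), np.asarray(sizes)
```

The returned sizes can serve as the area weights used by the subsequent clustering stage.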
These center points are considered the destination points where the site-specific mission has to take place, to provide a more thorough inspection of the low health conditions of the specific area. Yet, a large number of waypoints is usually prohibitive, since the flight time of a conventional drone is not adequate to operate in all these individual areas. Furthermore, there are cases where neighboring problematic areas can be considered parts of a wider area that can be examined with a single pass of the UAV asset. To overcome this issue, the k-Means clustering algorithm is employed, aiming to group the center points of smaller individual problematic regions and lead to a more manageable number of destination points. Focusing on an autonomous operation, the number of clusters k is selected automatically in the range [2, 10] based on the maximization of the Calinski-Harabasz score [13]. Furthermore, to acquire more meaningful results, a weighted clustering approach is selected, where the cluster centroids are calculated with respect to the size of each area. Figure 9(c) illustrates the outcome of the k-Means clustering. The acquired cluster centers are forwarded to the site-specific missions module as the points of interest where a further on-spot examination will be conducted. Moreover, the ability to fine-tune or exclude destination points is provided to the user, to create a more tailored site-specific mission that meets their requirements.
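The weighted grouping step can be sketched in plain NumPy. The paper presumably uses a library implementation, so the helper names below are illustrative, and the Calinski-Harabasz score is computed in its standard unweighted form:

```python
import numpy as np

def weighted_kmeans(points, weights, k, iters=50, seed=0):
    """Weighted k-Means: each centroid is the weighted mean of its
    assigned points, so larger problematic areas pull the centroid."""
    points = np.asarray(points, dtype=float)
    weights = np.asarray(weights, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for j in range(k):
            members = assign == j
            if members.any():
                centroids[j] = np.average(points[members], axis=0,
                                          weights=weights[members])
    return centroids, assign

def calinski_harabasz(points, assign, centroids):
    """Standard (unweighted) Calinski-Harabasz score [13]."""
    points = np.asarray(points, dtype=float)
    n, k = len(points), len(centroids)
    mean = points.mean(axis=0)
    counts = np.bincount(assign, minlength=k)
    between = sum(counts[j] * np.sum((centroids[j] - mean) ** 2) for j in range(k))
    within = sum(np.sum((points[assign == j] - centroids[j]) ** 2) for j in range(k))
    return (between / max(k - 1, 1)) / (within / max(n - k, 1) + 1e-12)

def cluster_centers(points, weights, k_range=range(2, 11)):
    """Pick k in [2, 10] by maximizing the Calinski-Harabasz score."""
    candidates = [weighted_kmeans(points, weights, k)
                  for k in k_range if k < len(points)]
    best = max(candidates, key=lambda cw: calinski_harabasz(points, cw[1], cw[0]))
    return best[0]
```

With `weights` set to the pixel counts of the detected regions, a few small adjacent areas and one large one collapse into a centroid biased toward the large area, as intended.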
As mentioned in Section 3.2, the employment of RGB-based indices aims to compose a low-cost and flexible framework. However, note that the presented approach for the identification of the problematic areas can be applied in the case of multispectral indices without any modifications and thus, it can be considered a "plug-and-play" solution.

Site-Specific Mission
Towards assessing the aforementioned problematic locations, a site-specific mission focused on the detected points of interest is designed. The main objective of this mission is to provide a more thorough inspection of these individual spots of the examined area that seem to be problematic in terms of plant health, and to provide valuable information to the end-user. To efficiently monitor these individual spots, visual footage from lower altitudes and different camera angles is captured, aiming to visually detect crop health deterioration sources, such as weed plants, and enable site-specific treatment.

Fig. 9: Demonstration of the processing pipeline for problematic area identification. Initially the index array is (a) binary thresholded, then (b) connected component analysis is conducted to detect the center of mass of every problematic region; finally (c) k-Means clustering is employed to estimate the centers of wider problematic areas.

Hot-Point Mission
From the motion planning perspective, visiting each of the individual points of interest can be reduced to the problem of finding the shortest path that visits all nodes (Fig. 9(c)).
At this point, the path planning problem is formulated as finding the optimal round trip that guarantees the shortest possible route while passing through all points exactly once. For this purpose, we applied a Travelling Salesman Problem (TSP) solver [7], renowned for finding the shortest tour within an undirected graph. The pseudo-code for monitoring each χi is given in Algorithm 2.
Given the takeoff/landing point and the overall set χ, in lines 1-6 we treat the defined points as an undirected stationary graph, representing each point within the examined area as a vertex. The distance between each pair of vertices defines the weight of their corresponding edge. In line 7, by solving the TSP, an optimal path that visits every vertex is defined (implementation available at https://github.com/CoFly-Project/Waypoint-Trajectory-Planning). In lines 9-10, every point-vertex of the resulting optimal path is labeled as a Hot-Point. As a next step (lines 11-15), we generate a set of m additional points with an angle ϑ ∈ [0°, 360°] and a constant radius r around a specified Hot-Point. Note that the number of points m that enclose a Hot-Point depends on an angle ϕ (line 13), which determines the angular distance between any two consecutive points on the perimeter of the circle. The smaller the value of ϕ, the greater the number of points on the perimeter of the circle. Figure 10(a) illustrates an indicative example of a circular flight that encircles a Hot-Point.
Based on Algorithm 2, the UAV performs a fixed-radius circle around each central point of interest (Hot-Point) as it follows the resulting path of the TSP solver, as sketched in Fig. 10(b). It should be highlighted that the resulting path starts and ends at a specific vertex (line 1), which denotes the takeoff/landing waypoint, after having visited all the other vertices of the graph exactly once. Moreover, if needed, during the execution of the mission the user can also use the physical remote control to modify the speed, heading, and altitude of the aircraft.
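The waypoint generation of lines 11-15 and the TSP tour can be sketched as follows. For simplicity the coordinates are local metric (x, y) rather than geodetic, and a brute-force tour is used, which the paper's TSP solver [7] would replace for larger vertex sets; function names are illustrative:

```python
import math
import itertools

def circle_waypoints(center, r=5.0, phi_deg=45.0):
    """Generate the m = 360/phi points enclosing a Hot-Point at a
    constant radius r (lines 11-15 of Algorithm 2)."""
    m = int(360 / phi_deg)
    return [(center[0] + r * math.cos(math.radians(i * phi_deg)),
             center[1] + r * math.sin(math.radians(i * phi_deg)))
            for i in range(m)]

def tour_length(points, order):
    """Total length of a tour given as a sequence of point indices."""
    return sum(math.dist(points[a], points[b])
               for a, b in zip(order, order[1:]))

def tsp_order(points, start=0):
    """Shortest round trip over the Hot-Points by exhaustive search;
    the tour starts and ends at the takeoff/landing vertex `start`."""
    rest = [i for i in range(len(points)) if i != start]
    best = min(itertools.permutations(rest),
               key=lambda p: tour_length(points, [start, *p, start]))
    return [start, *best, start]
```

With r = 5 m and ϕ = 45°, as in the study below, each Hot-Point is enclosed by m = 8 perimeter waypoints.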

A Hot-Point Approximation Study on Flight Time for UAVs
This study aims to estimate the UAV's mission time and analyze its association with the number of points of interest. For the evaluation study, 10 χ sets are used. Every set includes a different number of distinct points of interest. For each set, 100 simulation experiments, with a randomly shaped distribution of points of interest, were performed within an area of 10.77 acres (Fig. 10(b)). All sets were executed with a fixed radius r of 5 m and an angle ϕ = 45°. The simulations were carried out with a fixed altitude and speed of 10 m and 3 m/s, respectively. The overall performance of Algorithm 2 (Hot-Point mission) in terms of covering time for each set of χ is presented in Fig. 11.
According to Fig. 11, for each of the sets χ the estimated coverage time varies and depends on the geographical location (latitude, longitude) of each of the points of interest within the field. However, the key outcome of this study is that the developed algorithm, in each of the 100 simulations, achieves persistent coverage of 3-30 points of interest within an estimated coverage time close to the average flight time of most common commercial UAVs, i.e., 25-30 min. Such a feature is of paramount importance, as it supports trajectories covering a large number of individual spots while respecting the average energy capacity of a conventional UAV. Note that the Estimated Flight Time on the y-axis is achievable only if the user does not intervene during the execution of the Hot-Point mission. When users use the remote controller to adjust the flight status of the UAV, covering time increases or decreases proportionally to the duration of each action. As a result, the Hot-Point mission, as a feasible approach for site-specific inspection, can be applied to precision agriculture practices for automated visual footage, focusing on low-cost smart farming with common commercial UAVs.

High Level Semantic Information Extraction
During the Hot-Point mission, a closer inspection of the problematic field areas by the end-user is enabled through the captured visual footage. Yet, artificial intelligence can be employed to automate this process and compose the basis of a decision support system. Towards this direction, a weed detection module is deployed to analyze the captured visual footage. Accurate weed detection is a crucial, yet challenging, task of precision agriculture in order to maintain the health of the overall crop and minimize the damage through selective treatment. Convolutional Neural Networks (CNNs) have reported remarkable results in a wide variety of computer vision tasks, and especially in the fields of object detection [46] and semantic segmentation [42]. The proposed framework employs these techniques for the task of weed detection. Specifically, the particular module is based on DeepLabv3+ [14], a robust deep-learning model for image semantic segmentation achieving state-of-the-art results on the PASCAL VOC 2012 dataset [18]. Contrary to more naive approaches of image classification, where the input image is classified as a weed or not, the semantic segmentation approach enables the detection of multiple instances and types of weeds by applying a pixel-wise annotation on the input and thus provides accurate information regarding the location of the detected weeds. The developed module is capable of semantically segmenting weed instances depicted in input RGB images captured during the site-specific mission and thus provides critical high-level information regarding the location of detected weeds to the end-user in order to schedule counteractions on the spot.

Dataset
It is important to note that weed detection is a quite challenging task due to the nature of the problem, since the captured weed instances present a wide variety in terms of form, size, and shape. Moreover, the development of a meaningful dataset that can enclose the majority of different occasions is not an easy task, since it requires an extreme amount of man-hours and the corresponding workforce, as well as the collection of data over a wide period of time. Yet, to train and evaluate the deployed model, a custom dataset was designed for the task of weed semantic segmentation, as an initial approach to create the required challenging dataset. Towards this direction, a set of 1280 × 720 RGB images was collected during real-life experiments over a cotton field. The acquired images depict crop rows of cotton plants where different types of weeds interfere among the planting lines. The annotation of the collected data was conducted by field experts utilizing the LabelMe annotation tool [50]. The annotation process led to a set of 201 labeled images where depicted weed clusters are labeled as "weed" while the rest of the image is annotated as "background". The developed dataset [31] is publicly available 4 to the research community. To further demonstrate the challenging nature of the specific task, Table 4 presents the number of pixels for each class, along with the number of depicted weed instances contained in the built dataset. As expected, the developed dataset is imbalanced due to the complex nature of the problem.
The developed dataset was split into a training (80%) and a testing (20%) set. The splitting process was repeated 3 times aiming to create subsets with different class distributions in order to conduct a more thorough evaluation of the employed detector.

Train & Evaluate
The deployed DeepLabv3+ instance was pre-trained on the PASCAL VOC 2012 dataset and further trained on the previously described custom dataset for the weed semantic segmentation task. The model was trained for 500 epochs with a batch size of 12 on a GeForce RTX 2060 Super GPU. As an optimization algorithm, the Adam solver with a learning rate equal to 10⁻³ was selected. To tackle the imbalance issue among the dataset classes, focal loss was employed. Moreover, data augmentation techniques were utilized to enhance the generalization ability of the model. Specifically, every input image of the training set was horizontally/vertically flipped and randomly resized on a scale between 0.5 to 1.5 times its initial size, each with a chance of 50%. Finally, a 320 × 320 pixel patch was randomly cropped from the original image before being fed to the model, to further augment the training data.
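The augmentation recipe above can be sketched in NumPy; nearest-neighbor rescaling is used so the label mask stays categorical. The function name and the zero-padding fallback for undersized crops are illustrative details not specified in the text:

```python
import numpy as np

def augment(img, mask, rng, crop=320):
    """Training-time augmentation: 50% horizontal/vertical flips,
    random rescale in [0.5, 1.5], and a random crop x crop window.

    img: (H, W, 3) float array; mask: (H, W) integer label array.
    """
    if rng.random() < 0.5:
        img, mask = img[:, ::-1], mask[:, ::-1]   # horizontal flip
    if rng.random() < 0.5:
        img, mask = img[::-1], mask[::-1]         # vertical flip
    s = rng.uniform(0.5, 1.5)                     # random rescale factor
    h, w = mask.shape
    rows = (np.arange(int(h * s)) / s).astype(int).clip(0, h - 1)
    cols = (np.arange(int(w * s)) / s).astype(int).clip(0, w - 1)
    img, mask = img[rows][:, cols], mask[rows][:, cols]
    # pad if the rescaled image is smaller than the crop window
    h, w = mask.shape
    ph, pw = max(crop - h, 0), max(crop - w, 0)
    img = np.pad(img, ((0, ph), (0, pw), (0, 0)))
    mask = np.pad(mask, ((0, ph), (0, pw)))
    h, w = mask.shape
    y = rng.integers(0, h - crop + 1)
    x = rng.integers(0, w - crop + 1)
    return img[y:y + crop, x:x + crop], mask[y:y + crop, x:x + crop]
```

Applying the same spatial transform to image and mask together is what keeps the pixel-wise labels aligned with the augmented input.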
The developed weed detector is trained and evaluated on the three different dataset splits, while its efficiency is measured in terms of Intersection-over-Union (IoU), a standard accuracy measure for semantic segmentation tasks. In Table 5 the IoU for each class, along with the overall mean IoU (mIoU), is reported. Results imply that the deployed model is capable of correctly classifying the samples of the background class with significantly high accuracy for all splits. Regarding the weed class, the accuracy is lower, mostly due to the reduced number of samples in the dataset. Yet, one can consider the reported model performance satisfactory by taking into consideration that for the weed detection task it is adequate to detect a wide region where weed clusters are present, compared to other applications of semantic segmentation, such as autonomous driving, where a crisp prediction of the detected object shape is required.
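The reported metric is computed per class as the ratio of the intersection to the union of the predicted and ground-truth pixel sets; a minimal sketch (function name illustrative):

```python
import numpy as np

def iou_per_class(pred, target, num_classes=2):
    """Intersection-over-Union per class and its mean (mIoU).

    pred, target: integer label arrays of identical shape.
    Classes absent from both arrays contribute NaN and are
    excluded from the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(inter / union if union else float("nan"))
    return ious, float(np.nanmean(ious))
```

Because the "weed" class occupies far fewer pixels than "background" (Table 4), per-class IoU is more informative here than plain pixel accuracy, which the dominant background class would inflate.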

Graphical User Interface
All of the aforementioned services are packed inside an end-to-end precision agriculture platform called CoFly. The CoFly software is designed with the user workflow in mind, providing an interactive environment for precision agriculture applications. The primary goal is to make scanning, imaging, and parametric analysis of crops easy and accessible via the user interface (UI)5 (Fig. 12). These functionalities are adequate for most simple missions that require surveillance of a new crop, analysis of photometric indices, and detection of intrusive plant species. To finalize the CoFly software as an integrated end-to-end platform, the connection between the autonomous navigation system, the remote sensing control system, and the graphical user interface is essential to ensure stability in such a large-scale application. The communication of data among the different modules is implemented by a RESTful API service [40], which allows the different sub-systems to connect and interact with each other by retrieving the necessary data for their execution. Data transmission between sub-systems is done via .json files, which contain all the information needed to run any operation of the overall system. Figure 13 illustrates the overall CoFly workflow along with the communication among its modules. It must be emphasized that every sub-system of Fig. 13 is integrated with the CoFly UI. A video demonstration illustrating the sequence of events among all the aforementioned services is attached to the following link 6

5 https://github.com/CoFly-Project/cofly-gui
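Since the concrete .json schema is not given in the paper, the snippet below is purely illustrative of how a mission definition might be serialized between sub-systems over the REST layer; every field name is hypothetical:

```python
import json

# Hypothetical mission payload -- the field names are illustrative only
# and do not reflect the actual CoFly schema.
mission = {
    "mission_type": "coverage",
    "altitude_m": 50,
    "speed_ms": 3.0,
    "gimbal_pitch_deg": -90,
    "waypoints": [
        {"lat": 39.6390, "lon": 22.4191},
        {"lat": 39.6392, "lon": 22.4205},
    ],
}

# Serialize for transmission and parse on receipt by another sub-system.
payload = json.dumps(mission)
received = json.loads(payload)
```

Exchanging such self-describing documents is what lets the navigation, remote-sensing, and UI sub-systems evolve independently behind the shared API.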

CoFly-to-UAV Bidirectional Communication System
For attaining a drone mapping project defined by the GUI, the coverage and inspection tasks are executed by forwarding the objectives of the corresponding mission to the UAV. As an intermediate communication layer between the CoFly software and the UAV, a smartphone running a mobile application is connected to the UAV's onboard controller. Towards this direction, in this paper, to accommodate a common commercial UAV, a custom Android app adaptor has been developed with the use of the DJI API, acting as a data transceiver between Algorithms 1 & 2 and the UAV (as shown in Fig. 14). This application is responsible for forwarding the path to the UAV as well as gathering information (telemetry, photographs, statuses, and mission logs) and transmitting it to the backend of the platform.
Note that the aforementioned Android application has been set up to provide i) a high level of autonomy for the missions and ii) a dynamic visualization of the flight data. Based on this logic, the interaction between the user and the CoFly software can be done entirely from the GUI, with no need to directly control the UAV with the controller or the smartphone application. Nevertheless, as the mobile application also works as an on-the-field pilot by allowing the mission to be monitored live, the user can take control and directly operate the UAV, if needed. Note that according to the type of the UAV and its functional requirements, a different plugin might be required as an intermediate communication layer (see Adaptor in Fig. 13).

Real-life Experiments
This section presents two real-life experiments, where the proposed CoFly application was used to monitor, acquire data, and assess the vegetation health of real agricultural fields. The experiments were conducted in the fertile land of the Thessalian plain, and more specifically in a rural area planted with cotton located in Larissa, Greece. The functionalities and technologies of the CoFly software, as well as the produced results, were evaluated by the experienced agronomists of Bios Agrosystems S.A.7, who specialize in crop production, soil control, and soil management. This section aims to evaluate the robustness and efficiency of the CoFly software in real-world precision agriculture operations. To evaluate the efficiency of the proposed framework, two scenarios were considered. The first scenario was performed in July 2019, during the first stages of plant growth, while the second scenario was performed in September 2019, during the ripening period, when the cotton fibers had appeared. The main objectives of these two experiments were to (i) identify the optimum flight envelope to acquire the best data and provide an orthomosaic as an accurate map of the field, (ii) extract plant health information (VIs) that could give indications on the year's harvest and cotton's growth stage (timeline view), and (iii) detect the existence of weeds in a specific site of crops. In the next subsections, each experiment is analyzed and distinguished according to the flight date and cotton's developmental phase.

Mid-Summer
As to the first scenario, the cotton field during its early stage along with the calculated path is depicted in Fig. 15.
To demonstrate the practical enforcement of the proposed navigation algorithm (Algorithm 1) in terms of avoiding NFZs/obstacles, we considered an area within the original field as restricted, meaning that the UAV is not allowed to fly above it. For the coverage mission (see Table 6), the selected flight altitude and scanning density resulted in a spatial-level resolution of 90%. In addition, the operational speed was selected to be 3 m/s, which is low enough to maintain the quality of the collected images, and the gimbal pitch was set to −90° (meaning that the camera is always in "top-view" mode), an angle that calibrates images for 2D map outputs. Figure 15 graphically illustrates the resulting path as calculated by Algorithm 1. A direct outcome is that the path-planning algorithm managed to efficiently alter the UAV path to meet the field characteristics. More specifically, the path (i) completely covers the area of interest, (ii) does not contain path crossings, and (iii) carefully avoids the NFZs. The images acquired during the conducted mission are forwarded to the knowledge extraction module for further processing. Initially, the orthomosaic is calculated and presented in Fig. 16(a). It can be noted that the collected data are sufficient to create a high-resolution map of the examined agricultural area, implying the robust operation of the developed path planning module. In Fig. 16(b)-(e) the image representations of the calculated VIs are presented. The calculated VIs can provide a crisp overview of the crop's health, indicating areas with relatively poor vegetation conditions. Finally, in Fig. 16(f) the result of the points-of-interest detection module is illustrated for the VARI index. One can see that the implemented module can efficiently detect problematic, in terms of vegetation health, field regions where the site-specific mission should be deployed.
Given the geolocation of the depicted points of interest, Algorithm 2 was deployed to provide a site-specific inspection, as illustrated in Fig. 17(a). Additional information corresponding to the Hot-Point mission for the inspection task can be found in Table 6. In Fig. 17(b) qualitative results from the site-specific mission in the aforementioned areas are presented. It is noted that the deployed weed detector can sufficiently detect regions where weed instances are depicted. The advantage of the semantic segmentation approach is also illustrated in the results, since accurate spatial information regarding the existence of weeds in the field can be provided to the end-user through the selected approach.

Fig. 18: Calculated coverage path for the early-autumn mission. Fig. 19: Results of the knowledge extraction module from the experimental evaluation conducted in early autumn.

Early-Autumn
As to the second scenario, in early autumn, the cotton field during boll and fiber maturation, along with the calculated path, is depicted in Fig. 18. Note that the selected navigation parameters for the coverage task were adjusted to achieve a spatial-level resolution of 90%, while the UAV's altitude was set to 50 m (see Table 6).
In Fig. 19(a) the extracted orthomosaic of the examined area is presented. Similarly to the previous case, the designed path enables the collection of sufficient data to provide a detailed overview of the field. Figure 19(b)-(e) illustrate the calculated VIs, pointing out regions where the vegetation conditions are poor. Finally, in Fig. 19(f) the key outcome of the problematic areas detection module is demonstrated for the VARI index. It should be noted that this experiment was conducted in early autumn, a period in which cotton is harvested and the cotton plants are fully developed. Thus, any interfering weeds are covered by the foliage of the cotton plants and cannot be visually detected by an inspection from above. Furthermore, weed treatment can be considered an off-season activity during this time period, since it is more important to conduct it at the early stage of the crop's life cycle, when establishing the dominance of crop plants over the weeds is critical. Towards this direction, the site-specific mission for weed detection was not executed in this case.

Conclusions
UAV systems innovation is valuable in agriculture. Following leading-edge technologies, in this paper an end-to-end, low-cost precision agriculture platform based on a fully functional autonomous UAV system has been proposed. In contrast to the majority of existing agriculture platforms, which use cloud-based approaches for processing agricultural data and establish prices according to the information processing, the proposed platform has been designed to provide offline, real-time stand counts and assessments at the field's edge. Towards that purpose, coverage and inspection missions for any type of UAV are developed to provide plant health maps and weed detection through an integrated deep-learning module.
All of these capabilities are packed inside an end-to-end platform functioning on mobile devices including high-level user commands and proper data representations that ease the utilization of UAVs in precision agriculture applications.
The functionality of the proposed system was tested and validated with extensive experimentation in agricultural fields with cotton in Larissa, Greece during two different crop sessions: i) in mid-summer and ii) in early autumn. In both experiments, CoFly preserved all the desired features of common agricultural software while, at the same time, achieving fully autonomous inspection tasks, empowering UAVs as a tool for site-specific precision farming. This feature is of paramount importance in agribusiness, as it can be utilized by the farming community for information acquisition, analysis, and consistent field monitoring, helping farmers acquire contemporary field management skills. With a low budget, the CoFly software provides beneficial, high-accuracy insights that can become a critical component of the farming industry.
As future directions, we are interested in evolving this research by integrating multiple UAVs for monitoring and inspection tasks. In such an approach, the proposed agricultural software should be enriched with a multi-robot decision-making scheme to autonomously navigate each robot to perform individual coverage subtasks according to the overall set of the user's objectives. Future work may also examine more complicated scenarios where UAVs also perform pesticide spraying treatments, yielding a robust system tailored to a variety of agricultural applications.
Funding Open access funding provided by HEAL-Link Greece. This research has been financed by the European Regional Development Fund.

Conflict of Interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons. org/licenses/by/4.0/.