Study site
The entire RHEA system was tested in maize (Zea mays L.), wheat (Triticum aestivum L.) and olive trees (Olea europaea L.). The experiments were performed in two fields located in Arganda del Rey, Madrid, Spain (40°18′50.241″, −3°29′4.653″ for wheat and 40°18′57.924″, −3°29′3.7134″ for maize and olive trees). These fields are within the facilities owned by CSIC (the RHEA project co-ordinator).
Experimental data were recorded with basic sensors (ultrasonic sensors, encoders, etc.) and more sophisticated devices (GNSS receivers, laser scanners), measuring mainly positions (m) and volumes (l). The recorded data were analyzed based on average values.
System architecture
To accomplish these specific objectives, the RHEA multi-robot system was broken down into seven main systems organized into two parts: stationary equipment and movable equipment (Fig. 2). The stationary equipment contains the systems and devices allocated to fixed positions during the mission, close to the working field. All of these elements (antennas, Ethernet switches, routers and receivers) are physically installed in the base station (BS), which consists of a cabin that provides housing for the equipment and shelter for the operator; the BS is the operator's work station. The BS is supplied with AC power and equipped with a computer to which the stationary systems are connected and on which the relevant software applications run. The BS computer allows the operator to interact with the pertinent systems and modules through the Graphical User Interface (GUI), to define missions and to control the multi-robot system through the Mission Manager. This software module computes and supervises the missions, controlling the main components of the movable parts, i.e., the UGVs, the UAVs (through the UAV High-Level Controller) and related equipment, whenever needed. Figure 3 illustrates the BS system architecture.
The mobile robots (unmanned vehicles) are the elements of the movable part responsible for providing motion over the entire field for sensing crops and acting on them. Thus, the unmanned vehicles carry the Perception System through the fields. This system is composed of the Aerial Remote Perception System carried on the UAVs and the Ground Perception System on board the UGVs.
The RHEA UAVs are the result of an innovative design based on a hex-rotor drone, whereas the UGVs are ground robots built on a commercial tractor chassis capable of carrying the actuation equipment (agricultural implements): automatic machines that perform physical (mechanical and thermal) and chemical (spraying) pest control in three different crops.
The geographic positions of the UGVs and UAVs are obtained with the Location System based on GNSS technology. Thus, the Location System provides every vehicle with its current position and also provides the Mission Manager and GUI with the positions of all vehicles. Therefore, the Location System has components in both the stationary (base antenna and receiver) and movable (vehicle antenna and receiver) parts.
The stationary and movable equipment are interconnected by the Communication System, which is based on several Wi-Fi technologies. This system is also distributed: the base antenna serves the BS, and the vehicle antennas serve the vehicles.
RHEA is a complex system that requires intensive use to make it economically feasible. This efficiency can be accomplished using the contractor approach, i.e., a specialized company provides the requested services to individual farmers in different fields. Both stationary and movable equipment are moved from field to field on trucks, or autonomously if local legislation allows it. The primary goal of the RHEA project was to design, develop and integrate all of these systems with their relevant modules, as detailed in the following sections.
Unmanned vehicles
Unmanned aerial vehicles
The design of the new UAV was focused on fulfilling the requirement of carrying the Remote Perception System (1.5 kg) for approximately 30 min. The final drone, the AR-200, was envisaged as an extension of the commercial quad-rotor AR100-B, manufactured by project partner AirRobot, Arnsberg, Germany (Fig. 1). Table 1 summarizes the drone's main features.
Table 1 AR-200 main characteristics
The UAV high-level controller was developed in C++ on a Windows operating system to be executed on the BS computer and integrated with the GUI (Figs. 2, 3). This controller drives the drones during their flights through their respective UAV Ground Control Consoles, i.e., devices used by the backup pilot in the remotely controlled flight mode (Valente et al. 2013). The high-level controller translates the mission definition created by the aerial planner (a series of waypoints for each drone) to the specific protocol used by the drones, and, therefore, is responsible for creating the correct commands for the drones to execute the complete mission. This modular solution provides high flexibility in cases of changing drone protocols. The UAV high-level controller is also responsible for performing real-time alarm management, obtaining telemetry data from the aerial vehicles and translating data into meaningful physical variables.
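To illustrate this translation step, the following minimal Python sketch converts a planner waypoint list into per-drone command frames. The frame layout, field order and function names are hypothetical illustrations; the actual AirRobot protocol is proprietary and is not described here.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    lat: float   # degrees
    lon: float   # degrees
    alt: float   # metres above ground

def translate_mission(drone_id: int, waypoints: list[Waypoint]) -> list[bytes]:
    """Translate a planner mission (waypoint list) into hypothetical
    drone-protocol frames, one GOTO command per waypoint."""
    frames = []
    for seq, wp in enumerate(waypoints):
        # Hypothetical ASCII frame layout; the real drone protocol differs.
        frame = f"GOTO,{drone_id},{seq},{wp.lat:.7f},{wp.lon:.7f},{wp.alt:.1f}\n"
        frames.append(frame.encode("ascii"))
    return frames
```

Keeping this mapping in a single module is what gives the modular flexibility mentioned above: if the drone protocol changes, only the frame-building function needs to be replaced.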
Unmanned ground vehicles
The ground vehicles were based on a commercial vehicle chassis instead of a tailor-made robot structure. The selected chassis was the Boomer-3050-CVT (manufactured by project partner CNHi, Zedelgem, Belgium), illustrated in Fig. 4 along with the rest of the UGV sub-systems. This chassis exhibits the features detailed in Table 2. Using a long-tested, commercial platform to configure a mobile robot ensures:
Table 2 Characteristics of the Boomer-3050-CVT chassis
- Greater reliability: these vehicles have been tested and improved over a long period of time, whereas robotic prototypes are always prone to malfunctions and thus require modifications throughout the development of the project.
- Improved robustness: prototypes are weak systems when facing working conditions not considered during the design phase.
- Easier standardization: the proposed vehicles already fulfilled a large number of standards and had already been homologated for many tasks.
- Earlier availability: the delivery of the UGVs could be advanced by nine months with respect to the initial working plan, allowing the participants responsible for providing the sub-systems on board the UGV to greatly advance their developments, integration tasks and tests.
- Ease of expanding/modifying the system: small mobile robots are designed with a very limited payload, jeopardizing the inclusion of additional sub-systems; however, the proposed vehicle had a large payload/weight ratio, allowing the designers to modify the system without serious sub-system weight limitations.
- Ease in facing unforeseen problems: for example, those derived from the adaptation of the vehicles to specific requirements of different sub-systems, such as the type and dimension of the UGV wheels and tires, the length of the wheel shafts (to cover two or three crop rows underneath the vehicle), extra payloads, etc.
Nevertheless, this solution impacts safety: heavier vehicles carrying heavier loads increase the risks linked to a mobile system. Thus, the safety system had to be designed accordingly.
The three commercial vehicles (Fig. 1) were mechanically, electrically and hydraulically modified. A subset of the ISOBUS CAN (ISOBUS 2011) protocol was used to communicate between the vehicle sub-systems (transmission, steering, brakes, valves, etc.) and the UGV controller responsible for handling trajectory control. An electro-hydraulic steering system and some electro-hydraulic valves that power the implements were installed to be controlled by ISOBUS CAN messages. Two DC actuators were installed to control the engine RPM and the height of the three-point hitch (the device where the agricultural tool is attached). A 12 V 200 A alternator and a 24 V 120 A alternator were mounted on the vehicles to supply the required power.
A touchscreen with two CAN ports, one connected to the higher-level controller (ISOBUS) and the other to the vehicle CAN bus (SAE-J1939 2013), serves as the gateway. This device controls the handshaking procedure defined in the ISOBUS protocol to hand over control of vehicle functions to the external controller (ISOBUS 2011). The display also provides a tactile user interface to manually control the different functionalities of the tractor, such as steering, three-point hitch position and auxiliary valves.
The transmission controller software was modified to allow electronic control of the transmission instead of the original control using the throttle pedal. A new PID controller was designed to follow the speed set point received from the trajectory controller. This controller was also modified to allow activation of the system that provides the agricultural tool with power [power take-off (PTO)] and selection of the gears by CAN messages.
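As an illustration of this speed loop, the following is a minimal discrete PID sketch tracking a speed set point. The gains, sample time and normalized output range are illustrative assumptions, not the tuned RHEA values.

```python
class PID:
    """Discrete PID controller for a transmission speed set point.
    Gains and limits are illustrative, not the tuned RHEA values."""
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0
        self.out_min, self.out_max = out_min, out_max

    def step(self, setpoint: float, measured: float) -> float:
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Saturate to the actuator range (e.g., a normalized transmission command).
        return max(self.out_min, min(self.out_max, u))

# Example: track 0.83 m/s (3 km/h) with an assumed 50 ms control period.
pid = PID(kp=0.8, ki=0.2, kd=0.05, dt=0.05)
command = pid.step(setpoint=0.83, measured=0.70)
```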
The High-Level Decision Making System (HLDMS) is responsible for co-ordinating all the systems on board the UGV and making decisions on future actions. The hardware architecture of the HLDMS was designed, implemented and assessed relying on a CompactRIO 9082 computer (National Instruments Corporation, Austin, Texas, USA), and the software structure consisted of a number of modules and functions organized as illustrated in Fig. 5. This structure defines the communication (in terms of commands, responses and related parameters) between the HLDMS and every other sub-system connected to it. In particular, the communication between the HLDMS and the Mission Manager, the Weed Detection System, the UGV Controller and the User Portable Device was designed to allow the operator to supervise and control the multi-robot system.
Several versions of the HLDMS were tested, ranging from a version in which the HLDMS is a slave of the Mission Manager (the one used in the final project demonstration) to a version in which the HLDMS of a given robot controls the whole multi-robot system (Emmi et al. 2014).
Safety system
The safety aspects of both ground and aerial vehicles were evaluated in consecutive phases. First, a general study of current ISO and EN safety standards was conducted to determine possible connections between these standards. Then, the work focused on each individual type of vehicle, adjusting the safety requirements to its specific characteristics. Finally, a Multi-robot Supervision System (MSS) was built to detect different failures and hazardous situations, such as a tractor going off its track, incorrect working speeds, inappropriate implement status and potential collisions between robots.
The MSS is arranged as a distributed system with two levels. The MSS Low Level consists of the UGV Safety System and the UAV Safety System, which operate inside the UGVs and UAVs, respectively, and take care of the most urgent issues. The MSS High Level is responsible for the more complex supervision that involves the entire multi-robot system; this level runs on an external computer that receives all of the information provided by the robots. An external device allows a human operator to monitor the underlying system and take control if needed. The proposed system performs the three main supervision functions: fault detection, fault diagnosis and fault recovery. The MSS is detailed in Conesa-Muñoz et al. (2015).
UGV safety system
The intrinsic risk associated with UGV operations was analyzed, and the conclusions were used to install a safety controller on the UGV to achieve a higher safety integrity level for functions such as shutting down the engine, controlling the brake, controlling and monitoring the safety relays, and reading and monitoring the laser safety fields. This component is an independent controller based on a commercial programmable logic controller (PLC) that is able to stop the vehicle even in the case of failure of the other controllers.
For safety, the braking system had to be changed from a normally released (non-braking) system to a normally engaged (braking) system. Thus, an electronic brake, with control of the brake state over CAN and automatic activation in case of CAN communication failure, was installed in the UGV.
There are two safety levels in the UGVs:
- Manual safety system: this system is responsible for activating the brake and shutting down the engine of the UGV when any of the three emergency buttons placed on the UGV or the emergency button on the UGV remote controller is pressed (Fig. 4). When any of these buttons is pressed, the two safety relays of the receiver on board the UGV connected to the safety controller are switched off, and the safety controller activates the brake and switches off the engine. The remote stop function of this system is SIL3 (Safety Integrity Level 3) certified (SIL3, 2016).
- Proximity safety system: this system is based on a range finder (laser) installed at the center of the vehicle's front in a push-broom configuration (inclined) to detect obstacles along the vehicle trajectory (Fig. 4). This sub-system is connected to the PLC of the Safety System to activate the brake and stop the vehicle if an obstacle is detected in the safety zone. Intensive tests were conducted to define safety zones of variable size depending on the vehicle's actual speed (dynamic size) and crop characteristics (fixed parameters), as sketched after this list. Additionally, many experiments were conducted to measure the lasers' precision and adjust their configuration to real agricultural working conditions.
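The following sketch illustrates one plausible way to size the dynamic safety zone from the vehicle speed. The stopping model (reaction time, deceleration) and the fixed crop margin are assumptions, since the tuned parameters are not given in the text.

```python
def safety_zone_length(speed: float,
                       reaction_time: float = 0.5,
                       deceleration: float = 2.0,
                       crop_margin: float = 0.5) -> float:
    """Speed-dependent safety-zone depth (m) for the front laser:
    distance travelled before braking starts + braking distance +
    a fixed crop-dependent margin. All parameters are illustrative."""
    braking = speed ** 2 / (2.0 * deceleration)
    return speed * reaction_time + braking + crop_margin

# e.g., at the 0.83 m/s working speed used in the trials:
zone = safety_zone_length(0.83)   # ≈ 1.1 m
```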
UAV safety system
UAVs do not rely on a proximity detection system, since no aerial obstacles are expected during the flights. Thus, aerial multi-robot safety is based on creating safe flight plans that are carried out under the supervision of the human operator required by the emerging aerial safety legislation in Europe.
Accordingly, and considering that a multi-robot system rather than a single robot has to be managed, safety has been entrusted to the aerial mission planner, which not only defines safe paths for the UAVs but also maximizes the distance between them during the flights. Maximizing this distance during UAV operation turned out to be essential for reducing operator stress.
Moreover, some operating procedures were established to reduce the probability of crashes during take-off and landing maneuvers, e.g., ensuring that landing pads are free of obstacles before ordering a landing, or commanding different altitudes for approaching and leaving the working areas (inside which an identical altitude must be maintained to keep image resolution consistent).
Finally, it is worth noting that, in addition to the safety margin that six rotors provide (the UAV is able to fly safely with only five rotors), the drone controllers were developed with robustness requirements in mind, so that the drones can fly without a GNSS signal, although the operator is warned about this issue. As a result, this safety package ensures safe operation of the UAVs in the presence of human mistakes and mechanical failures.
Managing the multi-robot system
The Mission Manager is a software application developed for handling the multi-robot system. This application runs on the BS computer and is responsible for (a) generating the trajectories and actions for both aerial and ground vehicles, (b) supervising the trajectory and action executions and co-ordinating behaviors among vehicles when a conflicting situation or system malfunction is detected, (c) obtaining data from the remote perception system, and (d) interacting with the operator through the GUI (see Figs. 2, 3).
Two types of missions can be defined: (a) inspection missions carried out by the aerial vehicles and (b) treatment missions performed by the ground vehicles. At the outset, the operator provides information on the type of mission and the field specifications (crop type, field dimensions, geographical position, crop headings, etc.). This information allows the Mission Manager to decide the number of UAVs for inspecting the field, to select the number of UGVs needed to accomplish the task and to provide an action plan for each vehicle (Mission Planner) (Fig. 6). During the inspection mission, the Mission Manager uses the Remote Perception System to obtain the information needed to compute a weed distribution map, which is essential for generating the treatment mission plan in narrow-row crops. The Mission Manager also supervises the mission, making real-time decisions when unexpected events occur (Mission Supervisor), using the status information (mainly position, orientation, module states, errors, etc.) provided by the aerial and ground vehicles during mission execution. If the Mission Manager is unable to solve a conflict or manage a potentially dangerous situation, control of the multi-robot system is passed to the operator.
The Ground Mission Planner is related to the treatment mission and, for a given crop, determines the configuration of the set of ground vehicles (type and number of vehicles) as well as the plan for each of them to efficiently accomplish the treatment. The approach developed follows several optimization strategies that consider: (1) two possible criteria to be minimized: (a) the task cost, related to the travelled distance and the input costs (e.g., fuel, herbicides, labor), and (b) the time required to perform the task; (2) vehicles with different features (e.g., working speeds, both intra- and inter-field, turning radii, fuel consumption, tank capacities and spraying costs); (3) the variability of the field; and (4) the possibility that tanks must be refilled with herbicide or fuel during task execution. Simulated annealing and basic genetic algorithms are used to find the solution that minimizes either the task cost or the required time, while NSGA-II (non-dominated sorting genetic algorithm) is employed to minimize both criteria simultaneously. Figure 6 illustrates plans for both types of vehicle (Conesa-Muñoz et al. 2012, 2014).
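As an illustration of the simulated-annealing strategy, the sketch below orders a set of swaths to reduce a toy travel cost using 2-opt style moves. The cost model and parameters are simplified assumptions; the actual planner also handles multiple vehicles, refilling stops and the NSGA-II multi-objective case.

```python
import math, random

def plan_cost(order, swath_x, headland_penalty=5.0):
    """Toy cost: for each pair of consecutive swaths, the headland move
    is proportional to the gap between swath positions (m) plus a
    fixed turning penalty. Illustrative only."""
    return sum(abs(swath_x[a] - swath_x[b]) + headland_penalty
               for a, b in zip(order, order[1:]))

def anneal(swath_x, iters=20000, t0=10.0, alpha=0.9995):
    order = list(range(len(swath_x)))
    best = cur = plan_cost(order, swath_x)
    best_order, t = order[:], t0
    for _ in range(iters):
        i, j = sorted(random.sample(range(len(order)), 2))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # 2-opt move
        c = plan_cost(cand, swath_x)
        # Accept improvements always; accept worse plans with a
        # temperature-dependent probability to escape local minima.
        if c < cur or random.random() < math.exp((cur - c) / t):
            order, cur = cand, c
            if c < best:
                best, best_order = c, order[:]
        t *= alpha
    return best_order, best

order, cost = anneal([i * 0.75 for i in range(12)])  # 12 swaths, 0.75 m apart
```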
Perception systems
The Perception System identifies weeds in the field and other elements in the robot’s path relevant to the mission (crop rows, obstacles and persons), and it is divided into the Remote Perception System, placed on-board the UAVs, and the Ground Perception System, placed on-board the UGVs.
Remote perception system
Detecting weeds when crop and weed plants are at early phenological stages and exhibit spectral and appearance similarities is a challenging objective that can be met only by using high-spatial-resolution imagery (pixel size <0.05 m), which current vision technology on board UAVs can provide (Lopez-Granados 2011). This high spatial resolution generates a high intra-class spectral variability, which is handled by segmenting the image into multi-pixel, spectrally homogeneous regions named objects; these are studied as the minimum information units using Object-Based Image Analysis (OBIA). The OBIA approach develops automated and auto-adaptive classification methods by combining the spectral, contextual (position, orientation), morphological and hierarchical information of objects. The detailed OBIA workflows were described by Peña et al. (2013, 2015). Three main activities are considered: (a) a set of overlapping images is acquired to cover the complete field; (b) those images are stitched together and ortho-rectified into a mosaic; and (c) the weed patches are extracted using OBIA algorithms, taking into account the position of weeds relative to the crop lines, i.e., every plant not located on a crop row is considered a weed. Finally, a site-specific weed treatment program based on the weed patch map is designed using a grid of 0.5 m. This grid dimension is customisable according to each cropping pattern and the specifications of the herbicide spraying machinery. Figure 7 illustrates the procedure.
Image acquisition device
As a first step of weed patch detection, vegetation-soil discrimination is required. To maximize robustness under varying lighting conditions, a multispectral device including visible and near-infrared (NIR) channels was designed. A solution that preserves the complete color information was based on coupling two commercial still cameras, one of them modified to provide the NIR channel. A high-end camera combining a small footprint (compact body) and very high resolution (4704 × 3136) with no Bayer matrix (Sigma DP2 Merrill) was selected. This camera provides one-cm spatial resolution at ground level from a flight altitude of 60 m, each picture covering approximately 40 × 30 m.
Color and NIR images from the two cameras must be registered to provide a single 4-channel picture. An approach based on the Fourier-Mellin (FM) transform was successfully developed and tested. This approach identifies rotation, translation and scale changes between images using Fourier spectrum analysis. To cope with large camera images, which imply non-linear transformations, the original images are partitioned into a set of small image portions, on which the FM identification process is repeated iteratively. A global homographic transformation model is then computed, including lens radial distortion. This procedure was implemented on the Graphics Processing Unit of the BS computer using Compute Unified Device Architecture (CUDA) libraries. A registration accuracy of 0.3 pixels was obtained (Rabatel and Labbé 2016).
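For illustration, the sketch below implements only the translation step of such a pipeline, i.e., phase correlation on the normalized cross-power spectrum; in a full Fourier-Mellin approach, rotation and scale would first be recovered from the log-polar magnitude spectra. This is a generic NumPy sketch, not the CUDA implementation described above.

```python
import numpy as np

def phase_correlation(ref: np.ndarray, moving: np.ndarray):
    """Estimate the integer-pixel translation between two same-sized
    grayscale images via the normalized cross-power spectrum (the
    translation step of a Fourier-Mellin pipeline)."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half-range to negative shifts.
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx
```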
Aerial image mosaicking
Due to the unevenness of the field, overlapping source images are necessary so that every point in the parcel can be observed from at least two different points of view, allowing for the creation of a 3D digital surface model. Source images can then be ortho-rectified and combined. Moreover, color targets as ground control points with known geodesic co-ordinates are necessary for the geo-referencing of the resulting mosaic (Fig. 7).
The mosaicking process was implemented using an open-source package (MicMac software, IGN) and a few software modules developed to perform the automation, including picture synchronization with the flight log, automatic detection and labeling of ground targets around the parcel for geo-referencing, communication with the base station, and other modules (Rabatel and Labbé 2015). The resulting geo-referenced, 4-channel image of the parcel with a resolution of one cm is the input for the weed patch detection module.
The resulting georeferencing accuracy mainly depends on the quality of the initial georeferencing of the ground targets, which is a manual operation (an RTK GPS receiver is successively positioned on each target). According to the ground target co-ordinates in the resulting image, an absolute accuracy of a few centimeters is obtained, which is sufficient for weed patch detection and retrieval. Importantly, this limited accuracy does not jeopardize the quality of the multispectral data, because the multispectral registration is performed beforehand (with sub-centimeter accuracy) and does not depend on it.
Weed patch detection
Two OBIA algorithms for weed patch detection in maize and wheat crops were developed and tested in various real crop situations. First, a multi-temporal study (six UAV flight dates at seven-day intervals) was conducted to determine the most suitable early phenological stage for weed mapping in both crops; weed and crop plants were at principal stage 1 (leaf development) at the beginning of the experiment and at principal stage 2 (tillering for wheat, four true leaves for maize) at its end, according to the BBCH (Biologische Bundesanstalt, Bundessortenamt und CHemische Industrie) extended scale (Meier 2001). Once the best early growth stage had been determined, four locations (two wheat fields and two maize fields) were flown over to validate the results using a systematic on-ground sampling procedure (more details in Peña et al. 2015). One of the first steps in mapping weeds in both crops was to discriminate bare soil from the vegetation fraction (weeds and crop rows) using different vegetation indices and Otsu's thresholding method (Otsu 1979; Torres-Sánchez et al. 2014, 2015). Once the vegetation objects were discriminated, the crop-row structure was classified, mainly using the crop row orientation, by an iterative process. Finally, the vegetation objects located in the non-crop area were classified as weeds (López-Granados et al. 2016). Figure 8 illustrates the entire process for detecting weed patches in a field 40 m wide and 60 m long with nine 3 × 3 m weed patches equally distributed at the locations indicated in Table 3.
Table 3 Location of weed patches
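A minimal sketch of the vegetation-soil discrimination step follows, using the Excess Green index as one example of the vegetation indices mentioned above, together with a basic Otsu threshold. The index choice and histogram binning are illustrative assumptions.

```python
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """ExG = 2g - r - b on chromaticity-normalized channels."""
    s = rgb.sum(axis=2).astype(float) + 1e-9
    r, g, b = (rgb[..., i] / s for i in range(3))
    return 2 * g - r - b

def otsu_threshold(values: np.ndarray, bins: int = 256) -> float:
    """Otsu (1979): the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability
    m0 = np.cumsum(p * centers)       # class-0 cumulative mean
    mt = m0[-1]                       # global mean
    var_between = (mt * w0 - m0) ** 2 / (w0 * (1 - w0) + 1e-12)
    return centers[np.argmax(var_between)]

def vegetation_mask(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask: True where the pixel is vegetation, False for soil."""
    exg = excess_green(rgb)
    return exg > otsu_threshold(exg)
```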
Ground perception system
The Ground Perception System was designed to detect weeds, both inter- and intra-row, on maize fields. This system was based on an SVS-VISTEK camera connected to the HLDMS computer for acquiring images and running the relevant vision algorithms. The system was placed on board the ground vehicles (Fig. 4) and fully integrated with the HLDMS (Romeo et al. 2013).
The UGV operation speed for this application was fixed at 0.83 m/s (3 km/h), and the Region of Interest (ROI) for the Ground Perception System was defined to be 3 m wide and 2 m long, located in front of the UGV. With these parameters, the Ground Perception System has to deliver the GNSS data for image georeferencing and the image-processing results concerning the identified crop lines and weed coverage in less than 2.4 s, i.e., the time the UGV takes to traverse the 2 m-long ROI at 0.83 m/s, with all required data duly processed and correctly synchronized. Experimentally, the final average time used by the Perception System was lower than 0.35 s.
Weed detection relies on the spatial identification of crop rows. Thus, determination of crop-row positions with respect to the UGV becomes a crucial task for weed patch detection and UGV guiding.
Crop-row detection
In the last two decades, several strategies have been proposed for crop row detection based on (a) exploration of horizontal strips, (b) the Hough transform, (c) vanishing-point detection, (d) stereo-based approaches, (e) blob analysis, (f) accumulation of green plants, (g) frequency analysis and (h) linear regression. These methods are reviewed in Guerrero et al. (2013). In this specific case, the crop row positions were determined with respect to the UGV for guidance and weed detection purposes. A vision system processes the images to find four crop lines using two approaches with similar performance. The first is inspired by the Hough transform and is detailed in Romeo et al. (2013). The second applies linear regression based on the robust Theil-Sen estimator, detailed in Guerrero et al. (2013), where green plants are segmented by the thresholding method proposed in Montalvo et al. (2013). An inertial measurement unit (IMU) provides the external camera parameters pitch (α) and roll (θ), so that, along with all other parameters (both internal and external), four predicted crop rows are identified, serving as the basis for identifying the four real lines through the two methods mentioned above. The two central crop lines determine the UGV steering correction by computing the deviation of the UGV longitudinal axis with respect to an imaginary vertical line that divides the image into two equal portions. That is, the UGV (Fig. 9a) undergoes a slight deviation from the planned trajectory (approximated by the red line), and this misalignment is corrected by the UGV in the subsequent sampling periods, as illustrated in Fig. 9b. This method provides real-time features for robot guidance and weed detection.
Using a set of 400 selected images, the corrections to be ordered by the Perception System were established. Two levels of guidance operate continuously during UGV navigation: one based on the GNSS, which guides the UGV along a swath, and a second based on the Perception System, which detects small deviations, as explained above, and commands the required correction. After each correction, the algorithms confirm that the UGV is positioned correctly in the next image of the sequence. Testing showed that, on average, a correction was required in 30 % of the images, and the vehicle was positioned correctly in 89 % of the successive images. For all other images, the GNSS-based path-tracking algorithm assumed full responsibility for the guidance.
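A minimal sketch of the Theil-Sen step follows: a crop-row line is fitted to green-pixel coordinates using the median of pairwise slopes, which makes the fit robust to outlying weed pixels. In practice the pixel set would be subsampled, since the pairwise computation is quadratic in the number of points.

```python
import numpy as np
from itertools import combinations

def theil_sen_line(xs: np.ndarray, ys: np.ndarray):
    """Fit y = m*x + c to crop-row pixel coordinates.
    The median of pairwise slopes is robust to weed-pixel outliers."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)
              if xs[j] != xs[i]]
    m = float(np.median(slopes))
    c = float(np.median(ys - m * xs))   # median intercept given the slope
    return m, c
```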
Weed detection
For each image, a density matrix containing the green density values of the weeds covering the inter-row areas was computed and stored. The densities were transformed into the following linguistic labels: low, medium and high. Figure 10 illustrates two consecutive images obtained in a sub-path. Three types of lines delimit the cells required for computing the density matrix, defined as follows:
(1) After identifying the crop lines (in green), they are restricted to the ROI in the image (yellow lines).
(2) To the left and right of each crop line, parallel lines are drawn in red. These lines divide the inter-crop space into two parts.
(3) Horizontal lines (in blue) are spaced in pixels so that each segment is separated from the adjacent ones on the ground by approximately 0.25 m.
The above segments define 8 × 4 trapezoidal cells c_ij (i and j represent rows and columns, respectively). For each c_ij, the total number of pixels, A_ij, and the number of green pixels (represented in cyan), G_ij, are computed. Thus, a weed coverage matrix is computed as w_ij = G_ij/A_ij. A tolerance margin representing approximately 10 % of the width of a cell is excluded, assuming that this margin contains mainly crop plants instead of weeds. Using approximately 200 images, the weed coverage was classified into three levels (Low, w_ij ≤ 1/3; Medium, 1/3 < w_ij ≤ 2/3; High, w_ij > 2/3). With these levels (selected by an expert), the system obtained a 91 % success rate. Weed patch position and coverage are the inputs to the Actuation System.
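A minimal sketch of this computation follows. The cells are approximated by an even rectangular grid, whereas the real cells are trapezoidal and bounded by the detected crop lines; the grid size, margin and thresholds follow the values quoted above.

```python
import numpy as np

def weed_coverage(green_mask: np.ndarray, rows: int = 8, cols: int = 4,
                  margin: float = 0.10):
    """Compute w_ij = G_ij / A_ij per cell and label it Low/Medium/High.
    green_mask is a boolean vegetation mask; cells are approximated as
    an even rectangular grid (the real cells are trapezoidal)."""
    h, w = green_mask.shape
    labels = np.empty((rows, cols), dtype=object)
    for i in range(rows):
        for j in range(cols):
            cell = green_mask[i * h // rows:(i + 1) * h // rows,
                              j * w // cols:(j + 1) * w // cols]
            # Exclude ~10 % of the cell width (split between both sides),
            # where plants are assumed to be crop rather than weed.
            trim = int(cell.shape[1] * margin / 2)
            inner = cell[:, trim:cell.shape[1] - trim]
            wij = inner.mean() if inner.size else 0.0   # G_ij / A_ij
            labels[i, j] = ("Low" if wij <= 1/3
                            else "Medium" if wij <= 2/3 else "High")
    return labels
```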
It is worth noting that the Ground Perception System provides a real-time procedure that cannot be used in narrow-row crops. Therefore, the Remote Perception System is used in those crops, with the shortcoming that it is an off-line procedure. The remote procedure can also be used in wide-row crops, but a real-time procedure is preferred there. A comparison between these procedures was outside the scope of this study.
Actuation systems
The Actuation System is the automated agricultural equipment that acts directly on the crops. Three devices were developed in the RHEA project. Two were based on spraying techniques: (a) a weed patch spraying system for herbicide application in cereal fields and (b) an air-blast sprayer for olive-tree canopy treatment; the third (c) relied on mechanical/thermal tools for weed control in maize. These agricultural tools are illustrated in Fig. 11.
Weed patch spraying system
This system consists of twelve high-speed solenoid valves mounted on a stainless steel sprayer boom with an equidistant spacing of 0.5 m, providing a lateral resolution of 0.5 m. Each solenoid valve comprises a brass inlet for the incoming liquid, a spray nozzle, a nozzle cap and a LED indicator, which is lit when the solenoid is open (Fig. 11a). The sprayer boom has a fold/unfold mechanism managed by two electric linear actuators; the boom is folded along the sides during transportation between fields, when the UGV performs rotations in the field headlands and when avoiding obstacles. A commercial central direct-injection system was equipped with a water tank (200 l) and a separate container for the herbicide (15 l) to be injected according to the information coming from the High-Level Actuation System. The development was designed for three herbicide application volumes corresponding to three weed/crop ratios: low volume (100 l/ha), standard volume (200 l/ha) and high volume (400 l/ha). The injection system controller supplies a variable voltage to the gear motor that powers the injector pump; this voltage makes the pump turn at the appropriate speed to reach the desired concentration and then stabilize the injected flow. An encoder integrated into the system measures the flow rate of the active ingredient from the injector pump speed, and the controller uses this flow rate to determine whether a change in the active ingredient flow rate is required. The active ingredient flow rate is verified using a mini-flow meter (Perez-Ruiz et al. 2015).
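The following sketch illustrates the dose-rate arithmetic and a simple proportional voltage correction from the encoder feedback. The 6 m boom width follows from the twelve nozzles at 0.5 m spacing; the gain and voltage limits are illustrative assumptions.

```python
def target_flow_l_min(speed_ms: float, boom_width_m: float,
                      rate_l_ha: float) -> float:
    """Required spray flow (l/min): treated area per minute times the
    prescribed rate; m^2/min to ha conversion is the factor 60/10000."""
    return speed_ms * boom_width_m * rate_l_ha * 60.0 / 10000.0

def pump_voltage_step(v_now: float, flow_meas: float, flow_target: float,
                      k_p: float = 0.5, v_max: float = 12.0) -> float:
    """Proportional correction of the injector-pump supply voltage from
    encoder-derived flow feedback (gain and limits are illustrative)."""
    v = v_now + k_p * (flow_target - flow_meas)
    return min(max(v, 0.0), v_max)

# e.g., 0.83 m/s forward speed, 6 m boom, 200 l/ha standard application:
flow = target_flow_l_min(0.83, 6.0, 200.0)   # ≈ 6.0 l/min
```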
The mixing chamber of the injection system ensures that the agrochemical incorporated into the water stream is evenly distributed throughout the resulting volume.
In these tests, weed density (weeds m−2) was not compared across treatments to determine their effectiveness; the objective of this development was a field sprayer able to operate each nozzle independently, commanded by prescription-map information.
Individual tests of the High-Level Actuation System and the Device System, as well as many synchronization and integration tests, produced savings of approximately 96 % of the applied liquid for a field with weeds covering approximately 3.24 % of its area. For an infested area of approximately 10 % of the total area, the saving fell to 90 %. In any case, the target (75 %) was achieved whenever the weed-infested area was smaller than 25 % of the total area.
Canopy sprayer
A robotic air-blast sprayer was designed for the application of agricultural pesticides in olive groves, woody crops and orchards. The developed solution was based on a commercial air-blast sprayer (Oktopus by Nobili, Molinella, Italy), which consists of a high-pressure pump and spray diffusers to nebulize the mixture, along with a high-volume centrifugal fan to distribute the liquid pesticide. The spray diffusers were placed on two vertical steel struts, four on each one, separated by approximately 0.5 m. The design focused on real-time volumetric flow control, i.e., varying the dose, the air flow rate and its orientation separately and independently according to the target features. The spray optimization relied on (a) a perception system based on ultrasonic sensors, (b) three types of kinematic coupling driven by stepper motors and (c) a solution for liquid management (Fig. 11b).
The perception system, consisting of eight ultrasonic sensors, was also placed on the two vertical steel struts, four on each one. Every sensor is associated with one spray diffuser, placed at the same height and approximately 1 m ahead in the horizontal plane. Thus, each sensor measures the canopy distance t seconds before the associated spray diffuser reaches the same point (t = 1 m/vehicle speed). The sensing range goes from 0.03 to 2 m with a repeatability of ±0.25 % of the measured distance.
To improve spraying, the tree canopy was virtually divided into four overlapping divisions parallel to the ground. Each division is centered on the horizontal plane defined by a diffuser and its associated ultrasonic sensor (Fig. 12).
The automation procedure was designed on the following principle: the ultrasonic system detects an object at a distance x, which is sent to the Low-Level Actuation System (implement controller); this controller recognizes the corresponding scenario and generates the signals to (a) determine the opening of the valves and (b) control the air flow rate in the adductor tubes on the partition cap by means of eight small on/off butterfly valves. Finally, the optimization involves directing the airflow at the tree canopy. This is achieved by varying the attitude angle of the terminal spray modules (upper and lower on each strut), managed by a motor that rotates the spray diffuser in the range of −15° (downward) to +15° (upward), thus orientating the airflow.
To improve the spraying features (air and liquid flow rates), prescription rules for every actuator were defined according to the olive tree canopy characteristics. Four cases were considered (a code sketch of these rules follows the list):
Case 1
- Sensing: every ultrasonic sensor measures a distance x ≤ 0.5 m (full canopy detected).
- Actions: upper and lower spray modules parallel to the ground (0°); air flow rate at maximum; spraying active in each spray module.
Case 2
- Sensing: every ultrasonic sensor measures a distance x in the range 0.51 m < x ≤ 1.5 m.
- Actions: the upper and lower spray modules are inclined approximately 15° toward the center of the tree canopy; air flow rate at maximum; spraying active in each spray module.
Case 3
- Sensing: the upper and lower ultrasonic sensors do not detect canopy, but the central sensors detect a target at a distance x < 1.5 m.
- Actions: spraying and airflow are activated only in the central spray modules.
Case 4
- A pressure sensor checks the functioning of the hydraulic plant. If the working pressure is wrong, the Low-Level Actuation System sends a message to the Mission Manager (through the HLDMS) to let the operator take control of the system.
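A minimal sketch of this per-diffuser rule logic follows (Cases 1-3; Case 4 is a hydraulic-pressure check handled separately). The distance thresholds are those quoted above; the data structure and the per-sensor treatment of Case 3 are simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class DiffuserAction:
    spray_on: bool
    airflow_max: bool
    tilt_deg: float   # attitude of the terminal (upper/lower) modules

def canopy_rule(distance_m: float, terminal_module: bool) -> DiffuserAction:
    """Map one ultrasonic reading to the actions of its paired diffuser,
    following Cases 1-3 above. A sensor that detects nothing within
    1.5 m is treated as 'no canopy', which reproduces Case 3 for the
    terminal modules when only the central sensors see the tree."""
    if distance_m <= 0.5:                        # Case 1: full canopy
        return DiffuserAction(True, True, 0.0)
    if distance_m <= 1.5:                        # Case 2: thinner canopy
        tilt = 15.0 if terminal_module else 0.0  # tilt toward canopy center
        return DiffuserAction(True, True, tilt)
    return DiffuserAction(False, False, 0.0)     # no canopy: stop spraying
```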
The work carried out with the canopy sprayer concerns the physical application of the pesticide rather than the agrochemical dose. Thus, the system does not change the chemical application rate on the target, and therefore it does not need an injection system to vary the agrochemical concentration.
The standard physical application parameters on the tree foliage, i.e., the percentage of surface covered or the number of droplets per unit area and, for more detailed analysis, the specific dose deposited per unit area, were taken into account. The aim is to reproduce the standard application on the target while avoiding the waste due to off-target depositions.
Off-target deposition, i.e., drift inside the plant or outside the treated field, is caused by continuous spraying where there is no foliage and by air transport beyond the canopy when it is thinner than its maximum width.
Rules were defined that stop the application of liquid when no foliage is detected and apply three different levels of liquid mixture (30, 70 and 100 % of the conventional application) according to three steps of canopy width.
The upper and lower nozzles can be tilted because the upper and lower parts of the tree canopy are irregular and particularly susceptible to diseases. At the top, young shoots have plant tissues that are easily attacked by diseases; similarly, the lower parts, close to the ground, are influenced by soil moisture. In a targeted application, these areas need to be sprayed with special attention to obtain the conventionally accepted application rate.
Note that this implementation requires no action from the HLDMS (except a signal to start/finish the mission); therefore, this agricultural implement is autonomous from the control standpoint (Perez-Ruiz et al. 2015).
Mechanical and thermal tools
A physical weed controller was designed to perform mechanical and thermal weed control in maize. The mechanical weed removal implement was designed to remove the inter-row weeds and was based on shallow soil tillage carried out with different end-effector tools tailor-made for robotic applications. The thermal weed removal tools were intended to remove the intra-row weeds and relied on rod burners with an open flame (dry heat) fed by liquefied petroleum gas (LPG). The implement was developed using current robotic technology on a deployable mechanical structure.
The implement is based on an inter-row cultivator structure with a working width of approximately 3 m. The machine consists of five cultivator units, i.e., three complete units and two side half-units, enabling treatment of four crop rows, including three inter-row spaces of 0.75 m each and two lateral half inter-row spaces of 0.375 m. Each of the three complete units tills a 0.5 m wide strip of soil between the rows using one rigid goose-foot central tool and two adjustable rigid L-shaped side sweeps working at a very shallow depth (0.03–0.05 m). The two side half-units are provided with one rigid goose-foot element and one L-shaped sweep to work the two lateral inter-row strips (0.25 m wide). All the rigid elements are adjustable, allowing the tool to work properly on inter-row spaces ranging from 0.3 to 0.9 m.
The mechanical tools for weed removal are mounted on an adjustable articulated parallelogram structure with two small pneumatic wheels that follow the soil surface during treatment, maintaining the correct working depth of the rigid elements and the correct distance of the burners from the ground.
The site-specific physical weed control (PWC) implement was coupled with a UGV, making the entire system autonomous. The application of flaming is based on the Weed Detection System attached to the UGV. When this system detects weeds, greenness data are sent to the High-Level Decision Making System installed on the UGV and then to the Low-Level Actuation System mounted on the implement. Three levels of weed cover (low, medium and high) are supplied by the Weed Detection System to perform no treatment or treatment with medium or high LPG pressure (and, consequently, null, medium or high LPG doses). According to the percentage of weed cover detected by the Weed Detection System, the choice among the three levels of LPG pressure is made in real time to apply the proper dose to control weeds. Figure 11c illustrates this implement (Frasconi et al. 2014).
Experiments and a subsequent study on the effect of the LPG doses on maize and weeds at different growth stages concluded that weed control can reach a reduction of more than 90 % in terms of both density and biomass at harvest, with maize yields similar to or higher than those normally obtained conventionally. This confirmed the results obtained in previous experiments on cross-flaming applied to maize and other row-spaced summer herbaceous crops (Peruzzi and Raffaelli 2000; Perez-Ruiz et al. 2015).
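A minimal sketch of this real-time mapping follows; the two non-zero pressure values are placeholders, since the calibrated LPG pressures are not given in the text.

```python
def lpg_pressure_kpa(weed_cover_label: str) -> float:
    """Map the detected weed-cover level to an LPG feed pressure.
    'low' receives no treatment; the two pressure values are
    placeholders, not the calibrated RHEA settings."""
    return {"low": 0.0,        # no treatment
            "medium": 150.0,   # medium LPG pressure (illustrative)
            "high": 300.0}[weed_cover_label]   # high LPG pressure (illustrative)
```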
Communication system
The RHEA network was realized using a Linux-based wireless router providing IEEE 802.11b/g, IEEE 802.11a, IEEE 802.15.4 ZigBee PRO and ITU GPRS interfaces. The high-level communication architecture was based on a multi-technology management approach, in which different communication paradigms (publish/subscribe and request/reply) and services (best effort and enhanced reliability) were designed to utilize multiple interfaces. An interface based on TCP/UDP messages, implemented as a Network API, was used by all communication components. Accordingly, an IP addressing scheme for the overall system was defined, unambiguously addressing every component located on board a UGV or at the BS. To assure traffic prioritization and quality of service, the IEEE 802.11e standard was chosen. Finally, the communication system software, implemented as the Middleware, runs on the Linux-based wireless router as the execution platform (Hinterhofer et al. 2012).
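As an illustration of the publish/subscribe paradigm over a TCP/UDP Network API, the sketch below sends a topic-tagged status message over UDP. The message envelope, port and addresses are hypothetical; the actual RHEA Middleware message format is not described here.

```python
import json, socket, time

# Hypothetical envelope: topic name + timestamp + JSON payload, sent over
# UDP to a router address; the real RHEA Network API format differs.
def publish(sock: socket.socket, router: tuple, topic: str, payload: dict):
    msg = json.dumps({"topic": topic, "ts": time.time(), "data": payload})
    sock.sendto(msg.encode("utf-8"), router)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Illustrative addressing: one IP per on-board component (see text).
publish(sock, ("192.168.1.10", 5000), "ugv1/status",
        {"lat": 40.3139, "lon": -3.4846, "speed": 0.83})
```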
Location system
The Location System provides the GNSS positions of the robots relative to the BS, where the GNSS base antenna is located. Four RTK-GNSS receivers (Trimble BX982, Sunnyvale, California, USA), providing ±0.02 m accuracy, were used to locate the robots in all field trials. One receiver was used in the BS (reference antenna), and the other three were on board the three UGVs (each with two GNSS rover antennas). The BS receiver was configured as an autonomous reference station. Streamed outputs from the receiver provide detailed information, including time, position, heading, quality-assurance figures and the number of satellites in view. The receiver also sends a strobe signal at 1 Hz, which allows the remote devices to synchronize precisely in time. The main task of this receiver is to generate and transmit the GNSS correction signal to each robot so that UGV locations are determined accurately, with centimeter-scale accuracy on RTK-GNSS receivers and decimeter-scale accuracy on differential GNSS (DGNSS) receivers. A single connection (RS232, USB, Ethernet or CAN) to the BX982 delivers centimeter-accurate positions with better than a tenth of a degree heading accuracy (2 m baseline).
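Since each UGV carries two rover antennas on a short baseline, heading follows directly from the two RTK positions. The sketch below shows the geometry on a flat-earth approximation; it is purely illustrative, as the BX982 itself streams the heading.

```python
import math

def heading_deg(lat1, lon1, lat2, lon2):
    """Heading (degrees from north, clockwise) of the baseline from
    antenna 1 to antenna 2, using a local flat-earth approximation
    that is valid over a 2 m antenna baseline."""
    dlat = math.radians(lat2 - lat1)                              # north component
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians(lat1))  # east component
    return math.degrees(math.atan2(dlon, dlat)) % 360.0
```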
Graphical user interface
The graphical user interface (GUI) is a software application running on the BS computer (Fig. 3) in which a model of the real system is represented in 3D; the user can interact with it in different ways to obtain information on what is currently happening in the real system or to send commands to it. The GUI displays the status of the mission, the positions of the vehicles in the field, the sensor measurements, the camera images and other information received from the Mission Supervisor, and it can control the UGVs remotely as well as manage the mission execution. Moreover, the same program allows the operator to execute a real mission in the field or simply run the planned mission in simulation to evaluate its efficiency and test emergency maneuvers.
The core of the GUI consists of the commercial package Webots 7 robot simulator (developed by project partner Cyberbotics, Lausanne, Switzerland) that computes dynamic simulations using a realistic physics engine and represents the robots and relevant equipment in a 3D space. Another independent software module was developed to provide a single interface for the different partners’ modules (for example, the UAV Mission Planner and the UGV Mission Planner). This additional program constituted the entry point for the human operators and allowed them to set the parameters and manage the 3D environment to simulate missions or supervise them.
Models of the real operational setup were developed so that it could be accurately reproduced in the Webots simulation. Figure 13 illustrates the GUI.