Abstract
Robotics for agriculture and forestry (A&F) represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient and important industries. Over the course of history, mechanization and automation increased crop output by several orders of magnitude, enabling a geometric growth in population and an increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. This chapter addresses robotics for A&F in the form of case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop; these tasks generally end at harvest time. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing. The chapter is organized in four main sections. The first explains the scope, in particular which aspects of robotics for A&F are dealt with in the chapter. The second discusses the challenges and opportunities associated with the application of robotics to A&F. The third section is the core of the chapter, presenting twenty case studies that showcase (mostly) mature applications of robotics in various agricultural and forestry domains. The case studies are not meant to be comprehensive but instead to give the reader a general overview of how robotics has been applied to A&F in the last 10 years. The fourth section concludes the chapter with a discussion of specific improvements to current technology and paths to commercialization.
Keywords
- Total Factor Productivity
- Unmanned Aerial Vehicle
- Precision Agriculture
- Laser Rangefinder
- Protected Cultivation
Robotics for agriculture and forestry represents the ultimate application of one of our society’s latest and most advanced innovations to its most ancient industries. Since the dawn of civilization, agriculture and forestry (A&F for short) remain chief among humankind’s most important economic activities, providing the food, feed, fiber, and fuel necessary for our survival. (In this chapter, agriculture is understood as in the Merriam-Webster definition:
the science, art, or practice of cultivating the soil, producing crops, and raising livestock and in varying degrees the preparation and marketing of the resulting products.
Therefore, the term crop may be used here to denote any product of an agricultural or forestry process, including grains, cereals, fruit, vegetables, nuts, trees, beef, wool, etc. Whenever necessary, we will differentiate plant from animal products appropriately.)
Over the course of history, mechanization and automation – from the manual ploughs of yore to the modern combines of today – increased crop output by several orders of magnitude. This, in turn, enabled a geometric growth in population and a corresponding increase in quality of life across the globe. Rapid population growth and rising incomes in developing countries, however, require ever larger amounts of A&F output. Scientists predict that agricultural production must double to meet the demands of nine billion people in 2050 [56.1, 56.2, 56.3]. Clearly, this cannot be achieved by simply doubling the inputs (land, water, seeds, labor, etc.) because of constrained resources and environmental concerns. Therefore, the efficiency of the A&F system must increase in a sustainable and consistent manner.
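As a back-of-the-envelope check of this target (an illustrative calculation, not from the source): doubling output between roughly 2010 and 2050 requires a sustained annual productivity growth rate r such that

```latex
(1 + r)^{40} = 2
\quad\Longrightarrow\quad
r = 2^{1/40} - 1 \approx 0.0175 = 1.75\,\% \text{ per year},
```

which is consistent with the TFP growth rates discussed in this chapter.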
According to Reid [56.4], global agricultural total factor productivity (TFP), or the output per unit of total resources used in production, must increase from the current 1.4 to 1.75 to double agricultural output by 2050. This requires significant scientific, technological, and management advances in all of the factors that impact TFP – seeds, soil, water, fertilizers, herbicides, insecticides, crop architecture, cultural practices, automation, labor, public policy, etc. While robotics is but one of these factors, it is one with the potential to affect A&F in a broad, systemic way, and contribute significantly to meeting our future needs.

In this chapter we address the field of agricultural and forestry robotics from the point of view of the applications it enables, rather than from the point of view of the elementary technologies it comprises. Furthermore, we chose to present a limited number of case studies where robotics is being successfully applied to solve well-identified problems, rather than a comprehensive survey of all work reported in the literature. We believe the former is more meaningful as it showcases top-down, problem-oriented solutions (market pull) rather than bottom-up, technology-led ones (technology push). We focus on examples from the last five to ten years, as they represent work that leverages the most recent advances in sensors and computing.
We address applications of agricultural and forestry robotics in the form of twenty case studies where robotics is being successfully applied to solve well-identified problems. With respect to plant crops, the focus is on the in-field or in-farm tasks necessary to guarantee a quality crop; these tasks generally end at harvest time, before the crop is transported to a packing plant or warehouse. In the livestock domain, the focus is on breeding and nurturing, exploiting, harvesting, and slaughtering and processing.
1 Section Scope
Before presenting the case studies that showcase recent advances in robotics applied to A&F, we must define the scope of the chapter, both in terms of crop production processes and robotic technologies.
A typical crop production cycle includes several processes, among them field preparation, seeding/breeding, transplanting, planting, growing, maintenance (including attaching plants to support structures, disbudding, removing leaves, pruning tree limbs and shoots, thinning blossoms and fruit, nurturing animals, etc.), exploiting/harvesting/slaughtering, sorting, and packing. In all of them, internal transport of people, machines, and produce plays a role. Single-harvest crops such as lettuce must be replaced once harvested; multiple-harvest crops such as apples, tomatoes, and roses last a year or even several years before they need to be replanted. Depending on the crop, machinery with varying levels of automation exists for some or all of these processes. In grain and cereal production, for example, farmers have access to commercial machines for tilling, seeding, transplanting, spraying, irrigating, and harvesting. In fresh fruits and vegetables, on the other hand, mechanization and automation are more prevalent at the early and late stages of the production cycle [56.5], with crop maintenance and harvest remaining for the most part manual tasks.
With respect to plant crops, this chapter’s scope is limited to the in-field or in-farm tasks necessary to guarantee a quality crop; these tasks generally end at harvest time, before the crop is transported to a packing plant or warehouse. Of course, there are many opportunities for robotics in those latter environments, e. g., automatic sorting and grading. We exclude them from our presentation as they are currently much more the focus of automation efforts than of robotics per se. In future editions of the Handbook, we will revisit this separation as robotics advances to post-harvest tasks.
In the livestock domain, much of farming concerns the nurturing, exploitation, harvesting, and slaughtering of animals. Slaughtering extends beyond the killing of the animal to its division into marketable portions, and subsequent packing and marketing. The essential processes considered in this section pertain to land animals and birds and include:
- Breeding and nurturing: Livestock can range from poultry to cattle in feedlots, including pigs and other animals. Their care involves environment and behavior monitoring, plus fodder distribution.
- Exploiting: In many cases, products are obtained from the live animals. Sheep are shorn for wool, cows are milked, chickens lay eggs, and bees produce honey.
- Harvesting: Mustering and collecting free-range cattle, feral pigs, and other unconstrained animals.
- Slaughtering and processing: Cattle, poultry, pigs, and other animals are killed subject to strict regulations, then divided into marketable portions.
In a future edition of the Handbook, fishing should be included, since many opportunities for robotics can be imagined there.
2 Challenges and Opportunities
Since the end of the industrial revolution, (arguably) the three most significant impacts robotics and automation have had on agriculture and forestry are:
1. Precision agriculture, or the use of sensors to precisely control when and where to apply inputs such as fertilizers and water
2. Auto-guidance on field crop machinery, which today can drive down a field with an accuracy unattainable by human drivers
3. Machines that harvest fruits and vegetables for processing (e. g., tomato paste and orange juice).
Academic and commercial researchers are now focusing on the next wave of sensing, mobility, and manipulation technologies that promise to increase A&F output and productivity.
Sensing entails measuring crop temperature, humidity, pH, wetness, image, range, and other physical attributes, and combining and analyzing the data for specific purposes. One example is a camera-based system that takes pictures of an apple orchard a few weeks before harvest and produces an accurate crop yield estimate that growers can use to plan and manage the harvest operation [56.6, 56.7]. The utility of sensing to A&F is that it enables decision-making at a level unattainable by human sensing alone, because the latter is either inherently inaccurate or slow or both.
Mobility relates to various levels of vehicle automation that enable driverless (or driver-assistive) field coverage; the most common example is a global positioning system/global navigation satellite system (GPS/GNSS)-guided combine that harvests corn with minimal intersection between passes on the field, thus optimizing coverage and minimizing fuel consumption [56.8]. More recently, auto-guidance has started to migrate to orchard vehicles as well, albeit here other navigation sensors may be required because of poor satellite reception under thick canopies. In A&F, automated vehicles equipped with the appropriate implements enable (semi-)autonomous seeding, spraying, mowing, weed removal, harvesting, and animal feeding, among other operations. Mobility requires some level of sensor-based perception, provided by GPS/GNSS, inertial units, cameras, ladars, radars, etc. These sensors are not to be confused with those described in the previous paragraph, although there certainly are situations where a sensor can perform double duty of sensing for decision making and for navigation. In general, robotic mobility technology is currently less advanced than sensing.

Manipulation refers to the various operations performed directly on the crop, including pruning, thinning, harvesting, tree training, leaf probing, tree cutting, weed removal, etc. In general, this technology requires more sophisticated sensor-based perception than mobility, and in terms of field deployment is less advanced than either sensing or mobility.
Two domains that are particularly challenging for robotics research are orchard crops and crops in protected cultivation. (We use the term orchard liberally to include grape vineyards and orange groves, among other similar environments where crops are grown in well-defined rows.) These are highly valuable crops, potentially generating one to three orders of magnitude more revenue per acre than field crops. They are also characterized by the need for intensive cultivation and skilled labor. Consider, for example, apple production in the US – no less than 50 to 60 % of the variable costs to produce an apple is due to labor [56.9]. On top of that, labor is needed in bursts – in Washington state, in the US, up to seven times more workers are needed during the fall harvest season than during the winter pruning season. In many developed countries, labor availability for manual orchard operations is a challenge, putting significant pressure on growers to find innovative solutions to address their labor requirements. In the 2011 Tree Fruit Industry Perspective publication, industry leaders with the US Northwest Farm Credit Services say that:
[…] seasonal labor force utilized by the tree fruit industry for thinning and harvest operations will always be an issue of concern […]. Labor shortages at various times throughout the growing season have occurred and will likely occur in the future. The perishable nature of the tree fruit mandates a strict timeliness in field operations, and the generally narrow harvest windows do not accommodate labor shortages at critical stages. […] New technology […] could reduce labor requirements substantially over the next five to 10 years. Specifically, such technology would include the use of platforms and mechanical-harvest methods in the orchards, and increased use of robotics […].
Partly in response to the high labor costs and partly to increase production efficiency, the tree fruit industry is moving in the direction of highly structured planting architectures. Whereas before an apple grower had two to four hundred trees per acre, she now has 1200 apple trees per acre in a fruit wall configuration that is much more conducive to automation (Fig. 56.1a). This has led to the development of autonomous vehicles and tractors that drive from garage to block and traverse tree rows for hours on end, including turning from row to row, without any human assistance [56.10]. When equipped with the appropriate implements, these robotic vehicles can mow, spray, and collect tree and crop data for inventory management; and when configured as platforms, they can carry workers pruning, thinning, training trees, and harvesting the top part of the trees, thus eliminating the inefficiencies and injuries related to ladders. In the future, manipulators mounted on such vehicles will be able to probe plants for phenotyping purposes, and automatically prune, thin, and harvest.
A protected cultivation system is a powerful instrument to produce crops (Fig. 56.1b). Greenhouses protect crops from unfavorable climate conditions and pests, and offer the opportunity to modify the climate to create an environment that is optimal in terms of both crop quality and quantity. Protected cultivation is an intensive production method with high investment and operational costs, and therefore permits only the production of high-value crops: fruits and vegetables such as tomatoes, sweet peppers, and cucumbers; flowers such as roses, chrysanthemums, and gerberas; and many types of potted plants. In the past decades, in western societies, this type of production has been confronted with the increasing size of production facilities, increasing labor costs, reduced availability of sufficiently skilled labor, health problems of employees due to heavy and repetitive tasks, and growing competition on national and international markets. Automation and robotics are considered a way to address these issues. Additionally, growing concerns with food safety call for such technology. Last but not least, a precision horticulture approach is increasingly adopted, in which plants are treated on an individual basis to improve the quantity and quality of crop production whilst using resources as efficiently as possible. Given the current constraints on human labor, this has led to an even stronger call for automation and robotics.
Within the context of agricultural production, protected cultivation is, in terms of production area, a relatively small business worldwide. The total area in use for protected cultivation worldwide is estimated at roughly 740,000 ha [56.11]. In terms of added value, however, protected cultivation plays a much more important role. In The Netherlands, horticulture occupies only a few percent of the area available for agricultural production, yet generates roughly 35 % of the economic return of total agricultural production; such production is very capital- and labor-intensive. On a global scale, the potential of protected cultivation is increasingly acknowledged. Yet technology levels still vary widely, largely because of significant differences in local conditions in terms of market, economy, resource availability, etc.
Robotics research in protected cultivation has a history of some thirty years [56.12], focused mainly on harvesting and chemical spraying operations. Robotic harvesting has focused mainly on tomatoes, cucumbers, eggplants, sweet peppers, and strawberries, for which multiple research prototypes have been developed [56.12]. Single examples are known of a harvesting robot for roses [56.13] and gerbera [56.14]. Spraying robots have been reviewed in detail in [56.15]. Only two robotic systems other than for harvesting or spraying were found in the literature: a leaf-picking robot for cucumber [56.16] and a robot for bagging, thinning, and spraying in grapes [56.17]. In protected cultivation, robots have not yet been developed for tasks such as pruning, disbudding, and attaching plants to support structures.
Challenges in livestock production are of a more practical nature. When cattle roam freely they can be difficult to find and muster. Spotting by helicopter is expensive so there is great potential for the use of unmanned aerial vehicles. The method of collecting feral pigs and native kangaroos is usually by shooting, supplying markets in Germany and Russia respectively. There could in theory be an opportunity for the application of robotics, but it is difficult to see how this could be compatible with safety. It has been suggested that cattle could be located by carrying a transmitter. Radio communication involves considerable distances, so the use of GPS/GNSS collars with transmitting systems would pose interesting power and battery problems.
3 Case Studies
In this section we present a variety of case studies where robotics technologies are being applied successfully to problems in the agriculture and forestry sectors. The presentation follows a somewhat loose categorization by application domain, i. e., field crops, orchard crops, protected cultivation, forestry, and livestock. The intent is to group examples that are domain-specific while still allowing discussion of those that straddle domains. Within each domain, the case studies follow (again, loosely) the progression from sensing to mobility to manipulation.
3.1 Optimized Coverage for Arable Farming
Agricultural researchers and practitioners have long desired the capability to follow well-defined traffic lanes with common-width equipment systems, to minimize the effects of soil compaction on plant growth [56.18]. The advent of automation and control technology (e. g., GNSS, automatic steering) eliminated the need for complex processes to define the controlled-traffic lanes by making it possible for agricultural machine systems to follow precise paths spatially and temporally.
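The parallel-lane geometry underlying controlled-traffic farming can be illustrated with a short sketch. This is a toy illustration rather than any published planner, and the field dimensions and implement width are hypothetical: it lays out straight, evenly spaced boustrophedon (back-and-forth) passes over a rectangular field and counts the headland turns between them.

```python
import math

def plan_passes(field_width_m, field_length_m, implement_width_m):
    """Return a list of passes; each pass is ((x, y_start), (x, y_end)).

    Passes run parallel to the field's long axis, spaced one implement
    width apart, alternating direction so that consecutive passes connect
    with a headland turn at the end of the row.
    """
    n_passes = math.ceil(field_width_m / implement_width_m)
    passes = []
    for i in range(n_passes):
        x = (i + 0.5) * implement_width_m        # centerline of pass i
        if i % 2 == 0:                           # even passes go "up" the field
            passes.append(((x, 0.0), (x, field_length_m)))
        else:                                    # odd passes come back "down"
            passes.append(((x, field_length_m), (x, 0.0)))
    return passes

# Hypothetical 100 m x 400 m field, 6 m implement.
passes = plan_passes(field_width_m=100.0, field_length_m=400.0,
                     implement_width_m=6.0)
n_turns = len(passes) - 1   # one headland turn between consecutive passes
print(len(passes), n_turns)  # 17 passes, 16 turns
```

Real planners such as the one described in this section must additionally decompose irregular fields into sub-regions and choose the travel direction that minimizes turning cost.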
With the rapid adoption of automatic guidance systems, automated path planning has great potential to further optimize field operations. Field operations should be done in a manner that minimizes time and travel over field surfaces and is coordinated with specific field operations, machine characteristics, and topographical features of arable lands. To reach this goal, Jin and Tang [56.19] proposed an optimal coverage path planning (OCPP) algorithm where coverage is represented by a geometric model. To determine the full coverage pattern of a given field using boustrophedon paths, it is necessary to know whether and how to decompose a field into sub-regions and how to determine the travel direction within each sub-region. The search mechanism is guided by a customized cost function resulting from the analysis of different headland turning types and implemented with a divide-and-conquer strategy. The complexity of the algorithm is a function of the number of edges n of the field. To reduce the total turning cost, the number of turns must be minimized; in addition, turns with relatively high operational costs must be avoided. Fields of irregular shapes had inefficiencies related to headland turns when headlands were at an angle to the machine. Two-dimensional field examples with complexity ranging from a simple convex shape to an irregular polygonal shape with multiple obstacles in its interior were tested with the OCPP algorithm (Fig. 56.2). The results show that, in the most extreme two-dimensional cases, OCPP saved up to 16 % in the number of turns and 15 % in headland turning cost. In no case did OCPP produce solutions worse than those adopted by farmers.

When optimizing coverage paths over three-dimensional (3-D) terrains, more factors need to be considered, including headland turning cost, soil erosion cost, and skipped area cost. Jin and Tang developed an analytical 3-D terrain model with B-splines, analyzed different categories of coverage costs on 3-D terrains, and developed methods to quantify the soil erosion cost and curving path cost corresponding to a particular coverage solution. As in the 2-D case, terrain decomposition and classification methods are used to divide a field into sub-regions with similar field attributes and comparatively smooth boundaries. The divide-and-conquer strategy was also applied in the 3-D terrain coverage planning. The most appropriate path direction for each region was the one that achieved the minimum coverage cost. A seed curve searching algorithm was developed and applied to several practical farm fields with various topographic features (Fig. 56.3). The 3-D planning algorithm has shown its superiority on 3-D terrain fields compared with the 2-D planner. On the tested fields, the 3-D version saved on average 10.3 % on headland turning cost, 24.7 % on soil erosion cost, 81.2 % on skipped area cost, and 22.0 % on the weighted sum of these costs, where the corresponding weights were 1, 1, and 0.5, respectively. In one region in particular, the 3-D planning algorithm generated a result with only 30.5 % of the soil erosion caused by the 2-D planning algorithm. It was also observed that the skipped area resulting from the sharp turning curvature in 3-D planning is generally much smaller than the skipped area between paths when projecting 2-D planning results onto a 3-D surface.

3.2 Weed Control
Weeds compete with the production crop for light, water, and nutrients, and can have a detrimental impact on crop yield if uncontrolled [56.20]. For these reasons, chemical and mechanical weed control have long received the attention of agricultural engineers. Compared to current methods, robotics offers the opportunity to improve this important production task in two ways: greater precision when done mechanically, and reduced emissions and environmental runoff when done chemically. This section describes three examples of the application of robotics to weed control; for an overview of the field see [56.20].
Broad-leaved dock is a troublesome weed in grassland. When not controlled, it may reach a high density and considerably reduce grass yield. In response to a request from seventeen organic Dutch dairy farmers, a robot was constructed to detect and control broad-leaved dock in grassland [56.21] (Fig. 56.4). It consists of a custom platform with four independently driven wheels powered by a 36 kW diesel engine. The wheels are fitted with golf cart tires that provide traction on grass with minimal impact on the sod. Reduction gears ensure high torque, allow centimeter-precision forward movement, and limit maximum speed to a safe 3 mph. Skid steering was deemed sufficiently precise under the circumstances. Traversal of a large, featureless pasture was achieved by using real-time kinematic GPS (RTK-GPS) to follow a pre-defined path consisting of parallel segments connected by half-circle turns at the ends. Because weeds and grass are both green, a texture-based image analysis method was used to detect the former [56.22, 56.23]. The weeder proposed by Riesenhuber was used [56.24]; it consists of a rod driven vertically into the ground to fragment the weed’s taproot.

Volunteer potatoes are the leftovers of the previous year’s harvest. After a mild winter these potatoes will sprout and constitute a serious weed. They not only compete with the production crop, but also potentially carry diseases. In The Netherlands, legislation requires that they be removed annually by July 1st. This labor-intensive task naturally called for automation. As a joint effort of industry, policy makers, and scientists, a project to detect and control volunteer potato plants was initiated [56.25, 56.26, 56.27]. A proof-of-principle machine for sugar beet fields has been built and tested (Fig. 56.5). Machine vision-based detection was combined with a micro-sprayer with five needles and a working width of 0.2 m. The main error source was the variability in micro-sprayer droplet velocity, which caused longitudinal errors. Still, 77 % of volunteer plants above a minimum size were successfully controlled. Within the seed line, glyphosate was applied to weed potato plants, at the cost of up to 1.0 % of sugar beet plants unintentionally killed.
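A common first step in vision-based weed detection is separating vegetation pixels from soil. The systems above rely on more sophisticated texture analysis, but a minimal sketch using the well-known excess-green index, with an arbitrary threshold and a synthetic image, looks like this:

```python
import numpy as np

def excess_green_mask(rgb, threshold=0.1):
    """Segment vegetation with the excess-green index ExG = 2g - r - b,
    computed on chromaticity-normalized channels."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0                       # avoid division by zero
    r, g, b = (rgb[..., i] / total for i in range(3))
    exg = 2.0 * g - r - b
    return exg > threshold

# Synthetic 2x2 image: one green "plant" pixel, three brown "soil" pixels.
img = np.array([[[40, 180, 30], [120, 90, 60]],
                [[110, 85, 55], [115, 88, 58]]], dtype=np.uint8)
mask = excess_green_mask(img)
print(mask.sum())   # 1 vegetation pixel (the green one)
```

Color indices alone cannot separate weeds from crop when both are green, which is why the robots described here add texture or shape analysis on top of such a mask.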
The Intelligent Autonomous Weeder (Fig. 56.6) is a four-wheel-steer, four-wheel-drive platform for autonomous weeding operations in arable farming [56.28, 56.29, 56.30]. The platform combines dual-GPS-based navigation with computer vision-based row following. The four-wheel-steering construction offers excellent maneuverability, which is not only an advantage for precise operation within the crop, but also enables very compact headland turns.
3.3 High Precision Seeding
The premise behind this work is that, if an autonomous agricultural machine could accurately follow a predefined path to carry out seeding, then the same machine could drive in the field throughout the entire growing season carrying out subsequent tasks – e. g., weeding, fertilizing, spraying, etc. – without the need for repeated crop location sensing. The primary challenge is to achieve centimeter-level path tracking accuracy along straight runs by the mobile agricultural machines, and even greater accuracy (1–2 cm) in agricultural tool manipulation, e. g., controlling seeding tines for seed placement.
While there are semi-automated seeding systems currently on the market, they suffer from a number of problems with major implications for environmental sustainability, productivity, and economic return:
- They are passive, i. e., unable to take action to correct the seeding tines’ path.
- The seeding implements are towed by a GPS-guided tractor, but with insufficient accuracy (usually 40 cm); moreover, accurate path following by the tractor does not guarantee accurate seed placement by the implement.
- The tractor does not sense deviation of the seeding implement’s path, causing the crop to be laid out unpredictably.
- Even when the system senses implement deviation, it cannot correct the seeding implement’s path, let alone the position of the seeding tines.
- Tractors used in current systems are massive and therefore must use fixed tracks to limit ground compaction (which nonetheless affects up to 20 % of arable land).
- Insufficient seeding accuracy does not permit inter-row cropping in alternating seasons, a technique that utilizes remnant nutrients in inter-row spaces.
- Present-day systems do not have fully autonomous operational capability.
Katupitiya et al. [56.31] built and delivered a system that advanced the field in the following ways (Fig. 56.7):
- An active seeder equipped with sophisticated control systems that can take corrective action against path deviations while ensuring high-accuracy seeding tine position control.
- High-accuracy localization of the seeder and tines, achieved via precise GPS, high-precision sensors, data fusion, and control software. The control systems micro-adjust the seeding tines with respect to the main seeder frame, allowing the tine positions to follow a more accurate path than the seeder itself.
- A force-controlled, self-propelled seeder, allowing the size of the tractor to be greatly reduced. Smaller tractors also mean smaller wheels, so the tractor wheel width can be smaller than the inter-crop row width; the tractor can thus use the inter-row space as its wheel tracks without squashing the crop during subsequent operations. Together, these result in the complete elimination of ground compaction.
- A level of automation in the tractor-seeder pair such that the entire system is readily autonomous, with no operator required.
- A modular seeder, with either single or multiple units operating around the clock. A special-purpose tandem, non-linear adaptive pursuit path tracking algorithm controls the steering of the tractor and the seeder wheels. The seeder’s wheels are under force control based on the tension at the off-axle hitch point.
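The tine micro-adjustment described above can be pictured as a feedback loop. The following is a generic proportional-control sketch, not the actual control law of [56.31]; the gain, actuator rate limit, and time step are made-up values:

```python
def tine_correction_step(lateral_error_m, gain=2.0, max_rate_m_s=0.05, dt=0.1):
    """One control cycle: command a lateral actuator rate proportional to
    the tine's deviation from the desired seed line, saturated at the
    actuator's maximum slew rate. Returns the commanded displacement."""
    rate = max(-max_rate_m_s, min(max_rate_m_s, -gain * lateral_error_m))
    return rate * dt

# Simulate convergence: the tine starts 3 cm off the seed line.
error = 0.03
for _ in range(20):
    error += tine_correction_step(error)
print(round(error, 4))   # error shrinks toward zero
```

In the real system, the error signal would come from the fused GPS and sensor estimate of tine position relative to the planned seed line, and the controller would also account for the dynamics of the seeder frame.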
3.4 Crop Yield Estimation
One common desire of all fruit growers is knowledge of the crop yield. Accurate yield prediction helps growers improve fruit quality and reduce operating cost by making better decisions on intensity of fruit thinning and size of the harvest labor force. It benefits the packing industry as well, because managers can use estimation results to optimize packing and storage capacity.
Typically, yield estimation is performed based on historical data, weather conditions, and workers manually counting fruit in multiple sampling locations. Manually gathering samples is a time-consuming, labor-intensive, and inaccurate process, and the number of samples is usually far too small to capture the magnitude of the variation in yield across each block. Growers are searching for an automated and efficient alternative that can accurately capture spatial variation in yield.
To deal with this need, Nuske et al. [56.6, 56.7] developed a computer vision-based system to detect and count fruit. The system uses a camera rig for image acquisition, working at nighttime with controlled artificial lighting to reduce the variance of natural illumination. An autonomous vehicle is used as the support platform for automated data collection (Fig. 56.8). The system scans both sides of each row of trees or vines, detecting fruit captured within the image sequence, and then generates yield estimates.
The accuracy of the system was demonstrated by comparing its crop yield estimation with ground-truth recorded via careful and tedious manual measurements. Its end result is a yield map that closely resembles the true spatial distribution of yield, which growers can utilize to make critical production management decisions.
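Because foliage occludes part of the crop, raw image-based counts systematically underestimate true yield, so systems of this kind are calibrated against hand-counted samples. The sketch below uses made-up numbers (the cited system's actual calibration may differ) and fits a linear correction from visible counts to ground truth by least squares:

```python
import numpy as np

# Hypothetical calibration data: camera-visible fruit counts vs. hand-counted
# ground truth for a few sampled row segments (occlusion causes undercounting).
visible = np.array([120.0, 210.0, 95.0, 300.0, 180.0])
actual  = np.array([168.0, 290.0, 135.0, 410.0, 250.0])

# Fit actual ~ a * visible + b by ordinary least squares.
A = np.vstack([visible, np.ones_like(visible)]).T
(a, b), *_ = np.linalg.lstsq(A, actual, rcond=None)

def estimate_yield(visible_count):
    """Map a raw image-based count to a calibrated yield estimate."""
    return a * visible_count + b

print(round(a, 2))                 # slope > 1: each visible fruit stands in
print(round(estimate_yield(200)))  # for more than one actual fruit
```

Once calibrated on a handful of segments, the correction can be applied to the counts from every row to produce the block-wide yield map.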
The yield estimation system was deployed in a number of vineyards, apple orchards, and strawberry ranches, and the results show that the system works well in a variety of crops and training structures (Figs. 56.9 and 56.10).
3.5 Precision Irrigation
Current irrigation practice within the agricultural community often dictates over-watering crops as opposed to under-watering. This results in wasted resources, increased leaching of fertilizers, and an increase in crop disease. Crop models exist that determine adequate irrigation quantities, but they are seldom used because constantly modifying irrigation parameters is tedious and difficult.
Kohanbash et al. [56.32] used wireless sensor networks (WSNs) to monitor environmental conditions in real time and adjust irrigation parameters on the fly. WSNs can do basic set-point control, where irrigation is enabled every time the soil moisture goes below a predefined threshold. More advanced control methods are also possible by integrating crop water use models into the system. WSN systems communicate with a central base station that can be connected to the Internet, allowing remote access for viewing crop condition and modifying irrigation settings. A central base station also allows growers to monitor and analyze the crop growing environment for trends and long-term changes.

Early results (Tab. 56.1) show water savings of up to 75 %, fertilizer leaching reduced to near zero, increased crop quality, speed of crop growth increased by over 50 %, and reduced occurrence of crop disease. In addition to these savings, growers can use the data from WSN systems to tailor their crop for specific markets. For example, by adjusting the irrigation set-point value, a grower can choose between a more expensive (Grade A) product and a less expensive (Grade B) product that they can sell in larger quantities.
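The basic set-point control described above can be sketched as a simple threshold rule. In this minimal sketch, the WSN node readings are averaged and the valve opens when soil moisture drops below the set-point; the hysteresis band and all numeric values are assumptions added to avoid valve chatter, not parameters of the deployed system.

```python
# Minimal sketch of WSN set-point irrigation control (illustrative only).

def irrigation_command(soil_moisture, set_point, hysteresis=0.02):
    """True = open valve, False = close, None = keep the previous state."""
    if soil_moisture < set_point - hysteresis:
        return True           # too dry: irrigate
    if soil_moisture > set_point + hysteresis:
        return False          # wet enough: stop
    return None               # inside the band: no change

def control_step(readings, set_point, valve_open):
    """One control cycle: fuse the node measurements, apply set-point logic."""
    avg = sum(readings) / len(readings)
    cmd = irrigation_command(avg, set_point)
    return valve_open if cmd is None else cmd
```

Integrating a crop water use model would simply replace the fixed set-point with a value computed from the model at each cycle.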
3.6 Tree Fruit Production
Autonomous orchard vehicles will radically transform tree fruit production by automating maintenance operations such as mowing and spraying, and by augmenting workers performing pruning, thinning, and harvesting. From a robotics point of view, these applications can be realized with a relatively simple, yet challenging, capability: driving along one row of trees, turning at the end of the row, and entering the next one. Challenges include reliably sensing the trees in the presence of sloped terrain, branches, tall grass, and missing trees; localizing the vehicle in the orchard; following trajectories inside and outside the rows; and avoiding obstacles. Additionally, the added cost of the autonomy components should be as low as possible to make such vehicles commercially viable.
Figure 56.11 shows a family of autonomous orchard vehicles with a common sensing and computing infrastructure that allows them to cover entire orchard blocks continuously for several hours. To keep cost low, they are not equipped with GPS/GNSS or inertial navigation systems; instead, perception and navigation are achieved with a laser rangefinder mounted on the front of the vehicle together with steering and wheel encoders. Table 56.2 summarizes the three autonomous control modes in which the vehicles can operate.
From 2008 to 2012 the vehicles drove a combined 350 km in experimental and commercial orchards, vineyards, and nurseries in several US states, including many in Washington state, the largest apple producer in the United States. The longest run covered 25 km over five hours [56.10, 56.33]. Time trials conducted by Pennsylvania State and Washington State University Extension educators showed that workers onboard an autonomous orchard platform can be twice as fast as workers using ladders or on foot when working on the top portion of apple and peach trees (Fig. 56.12) [56.34].
3.7 Vehicle Formation Control
Multiple robotic agricultural vehicles driving in formation offer capabilities beyond those of a single vehicle. Lenain et al. [56.35] developed a control architecture that enables accurate and stable control of several robots in a given, possibly variable, formation. The method is based on a path-tracking framework, defining each robot's position as a lateral distance with respect to a given path and a longitudinal (curvilinear) distance along this path. Perturbations due to motion in natural environments (e. g., poor grip conditions and uneven terrain topography) are compensated with a nonlinear model-based controller that includes an observer for wheel sideslip. From the formation-control point of view, the lateral error is no longer regulated to zero but to a desired set-point, which may vary to account for the deviations of the other vehicles. In addition, the velocity of each vehicle is controlled to maintain a desired curvilinear distance with respect to the others, while a desired speed is imposed on one of them. As with the lateral dynamics, the desired distance may vary, and the error may combine the longitudinal errors of one vehicle with respect to another.
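A minimal numeric sketch of the two regulated quantities in this scheme follows: a curvilinear (along-path) gap to the leader and a lateral offset from the shared path. The straight reference path and the proportional gains are assumptions made for illustration; the actual controller is nonlinear and model-based with sideslip compensation.

```python
# Illustrative formation errors and proportional corrections (assumed gains).

def path_coordinates(x, y):
    """Curvilinear abscissa s and lateral deviation d; a path along the
    x-axis keeps the geometry trivial for this sketch."""
    return x, y

def formation_errors(leader_xy, follower_xy, gap_ref, lateral_ref):
    s_lead, _ = path_coordinates(*leader_xy)
    s_fol, d_fol = path_coordinates(*follower_xy)
    gap_error = (s_lead - s_fol) - gap_ref      # along-path spacing error
    lateral_error = d_fol - lateral_ref         # offset from lateral set-point
    return gap_error, lateral_error

def follower_commands(gap_error, lateral_error, v_leader, k_v=0.5, k_d=1.0):
    """Proportional corrections: speed closes the curvilinear gap, while the
    steering term regulates the lateral offset toward its set-point."""
    v_cmd = v_leader + k_v * gap_error
    steer_cmd = -k_d * lateral_error
    return v_cmd, steer_cmd
```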
This approach was tested in the field with two electrical off-road robots (Fig. 56.13a). They are equipped with RTK-GPS providing an accuracy of , and are able to communicate their positions to each other over a wireless link. Weighing around 500 kg each, these two-meter-long vehicles can reach speeds of and climb slopes of up to 20 degrees. In this example, the first vehicle has to follow the trajectory depicted in black, composed of a straight line along a 15-degree grass slope at a speed of , followed by a half turn coming back on flat ground. The second vehicle must follow the first one at a desired curvilinear distance of 10 m and a desired lateral deviation of 1 m. The positions of the first and second vehicles are shown as blue and red dots in Fig. 56.13b, respectively. In these conditions, the accuracy of relative positioning is within despite the challenging terrain conditions (low grip, uneven terrain, curved trajectory). This work demonstrated the capability of maintaining a formation of several vehicles under conditions representative of agriculture, opening the way to multi-vehicle platoons in the field for precision agriculture.
3.8 Date Palm Tree Spraying
Date palm tree spraying is normally done manually by a team of three workers from a platform 18 m or more above the ground. In the past, many accidents have occurred due to instability of the platform in the lifted position. Degania Sprayers Company (Israel) developed a sprayer with a tall air cannon and a pan-tilt unit at its end to control the air flow and spray direction. This system, however, requires a worker to manually aim the spray toward the tree. Shapiro et al. [56.36] therefore developed an automatic tree tracking system for the sprayer (Fig. 56.14).
The tracking system is based on an ultrasonic range sensor to detect the trees and a proximity sensor mounted on the sprayer's wheel to measure distance traveled. A human driver is responsible for maintaining the sprayer at a distance of 3.5 m from the trees. Knowing the average distance between trees in the row (9 m), it is possible to compute the desired spraying angle as , where x is the distance traveled since the last tree (Fig. 56.15). This value is fed into a proportional–integral–derivative (PID) controller that outputs a pulse-width modulation (PWM) signal to an electrically controlled hydraulic valve that sets the spraying angle. The authors opted for an on/off valve driven by a PWM signal instead of a proportional valve to reduce the system cost and thus make it more attractive to farmers. The rotation angle of the sprayer is measured by a potentiometer and used as feedback for the PID controller. When the sprayer reaches the midpoint between two trees, i. e., , the sprayer is set to rotate toward the next tree in the row and start spraying it. The tracking algorithm is implemented on an Arduino microcontroller.

The system was built and deployed in date farms, showing good tracking results. It replaces a human worker who previously had to perform the dull task of controlling the sprayer with a joystick to track the trees. A preliminary economic analysis indicates that the additional cost of the tracking system can be recovered in one harvest season.
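The PID-to-PWM chain described above can be sketched as follows. The controller acts on the potentiometer angle error and its output magnitude is clamped to a duty cycle for the on/off hydraulic valve; the gains, time step, and scaling constant are assumptions, not values from the actual system.

```python
# Illustrative sketch of the sprayer angle control loop (assumed gains).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """One control update: returns the raw actuation command."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)

def to_pwm_duty(u, u_max=10.0):
    """Clamp the command magnitude to a duty cycle in [0, 1]; on the real
    valve, the sign of u would select the rotation direction."""
    return max(0.0, min(1.0, abs(u) / u_max))
```

Driving an on/off valve with a PWM duty cycle approximates the average flow of a proportional valve at a fraction of the cost, which is the trade-off the authors made.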
3.9 Plant Probing
In large botanic experimentation fields, treatments (watering, nutrients, sunlight) that optimize certain desired aspects (growth, appearance) need to be determined. Toward this aim, experiments entailing many repetitive actions need to be conducted. For example, measurements and samples from leaves must be taken regularly, and some pruning may need to be performed. For these tasks robots would be very handy, but difficulties arise from the complex structure and deformable nature of plants, which not only change appearance as they grow, but whose leaves also move on a daily and sometimes hourly basis. Even though recent advances in depth sensors, deformable-object modeling, and autonomous mobile manipulation have brought this goal within reach for robotic applications [56.37, 56.38], many problems remain, in particular robust recognition and localization of plant parts (leaves, flowers, fruits, stems) and robot manipulation under weakly constrained conditions in natural environments.
In this context, the European project GARNICS (gardening with a cognitive system) aims at 3-D sensing of plant growth and at building perceptual representations for learning the links to the actions of a robot gardener. Plant sensing and control are addressed by combining active vision with appropriate perceptual representations, which are essential for cognitive interactions.

An application of robotized phenotyping related to this project is the accurate placing of a measurement tool on a leaf in order either to cut sample disks from the leaf or to measure chlorophyll content. The robotic arm is equipped with a time-of-flight (TOF) camera and a measurement tool (Fig. 56.16) [56.39, 56.40, 56.41]. In this approach, image segmentation and model fitting are employed to recognize and localize single leaves from depth information. 3-D data is combined with color or infrared images and segmented into surface patches, which are assumed to correspond to actual plant leaves [56.42].

A next-best-view strategy was proposed for finding a non-obstructed, frontal view of the leaf [56.41]. Initially, the robot arm is moved to a position from which a general view of the plant is obtained. The depth and infrared images acquired from this position are segmented into their composite surfaces, leaf model contours are fitted to the extracted segments, and the validity of the fit and the graspability of the leaf are measured (Fig. 56.17). A target leaf is selected and the robot moves the camera to a closer, frontal view. The validity of the target and the graspability are re-evaluated. If the leaf is considered suitable for sampling based on these criteria, the probing tool is placed onto the leaf following a two-step path. If the target is not considered suitable for probing, another target leaf (from the general view) is selected and the procedure is repeated.
The method is based on several assumptions: (i) the boundaries of leaves are visible in the infrared-intensity image; (ii) the leaf surfaces can be modeled by a basic quadratic function; (iii) leaves of a specific plant type can be described by a common 2-D contour; (iv) leaves are large enough to allow analysis by a TOF camera; and (v) the leaves are static during probing.
These assumptions may be violated under certain conditions, and further research will have to be undertaken to solve the various problems originating mostly from the complex and deformable nature of plants.
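The target-selection loop described above can be sketched as follows: candidate leaves are ranked by the product of fit validity and graspability, and each chosen target is re-verified from a closer, frontal view before probing. The score fields and the acceptance threshold are illustrative assumptions, not quantities from the GARNICS system.

```python
# Hedged sketch of the next-best-view target selection and fallback loop.

def select_target(leaves, min_score=0.5):
    """Return the best-scoring candidate leaf, or None if none qualifies."""
    best, best_score = None, min_score
    for leaf in leaves:
        score = leaf["fit"] * leaf["grasp"]   # fit validity x graspability
        if score >= best_score:
            best, best_score = leaf, score
    return best

def probing_loop(leaves, close_view_ok):
    """Try candidates in score order; reject on the close-view re-evaluation
    and fall back to the next leaf from the general view."""
    remaining = list(leaves)
    while remaining:
        target = select_target(remaining)
        if target is None:
            return None                 # no suitable leaf in this view
        if close_view_ok(target):
            return target               # suitable: place the probing tool
        remaining.remove(target)        # rejected up close: try the next one
    return None
```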
3.10 Cucumber Harvesting
A prototype robotic harvester for cucumbers is presented in Fig. 56.18. The machine consists of a mobile platform that runs on rails. These rails are commonly used in greenhouses in The Netherlands for internal transport, but they also serve as the hot-water heating system of the greenhouse. Harvesting requires functional steps such as detection and localization of the fruit and assessment of its ripeness. In the case of the cucumber harvester, the different reflection properties in the near-infrared spectrum were exploited to detect green cucumbers in the green environment [56.43, 56.44, 56.45]. Whether a cucumber was ready for harvest was determined from an estimate of its weight. Since cucumbers consist of almost 95 % water, the weight was estimated from the volume of the fruit. Stereovision principles were then used to locate the fruits to be harvested in the 3-D environment. For that purpose the camera was shifted 50 mm on a linear slide and two images of the same scene were taken and processed. A Mitsubishi RV-E2 manipulator was used to steer the gripper-cutter mechanism to the fruit and transport the harvested fruit back to a storage crate. Collision-free motion planning based on the A* algorithm was used to steer the manipulator during the harvesting operation [56.44]. The cutter consisted of a suction cup on a parallel gripper that grabbed the peduncle of the fruit. (The peduncle is the stem segment that connects the fruit to the main stem of the plant.) The suction cup then immobilized the fruit in the gripper. A special thermal cutting device was used to separate the fruit from the plant; its high temperature also prevented the potential transport of viruses from one plant to another during harvesting. For a successful harvest this machine needed 65.2 s on average, and the success rate was 74.4 % [56.43].
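The two-view localization step can be sketched as standard stereo triangulation: the camera is shifted by the 50 mm baseline, and depth follows from the pixel disparity between the two images. The focal length and principal point below are assumed values, not the harvester's calibration.

```python
# Stereo triangulation sketch for the shifted-camera setup (assumed intrinsics).

def depth_from_disparity(disparity_px, focal_px=800.0, baseline_m=0.05):
    """Depth (m) of a matched point; disparity must be positive."""
    if disparity_px <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity_px

def locate_fruit(u_left, u_right, v, focal_px=800.0, baseline_m=0.05,
                 cu=320.0, cv=240.0):
    """3-D position (m) in the first camera frame from matched image points."""
    z = depth_from_disparity(u_left - u_right, focal_px, baseline_m)
    x = (u_left - cu) * z / focal_px
    y = (v - cv) * z / focal_px
    return x, y, z
```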
3.11 Cucumber Leaf Removal
Harvesting has received considerable attention in robotics research focused on protected cultivation, yet it is not the only time- and labor-consuming cultivation operation. In cucumber production, among others, removal of old, non-productive leaves in the lower regions of the plant is a time-consuming task. Figure 56.19 shows the same platform used for harvesting, but here the hardware and software were used for removal of leaves from the plants [56.16]. In this system, the camera system identifies and locates the main stem of the plants. The gripper is sent to the plant and moved upwards; leaves encountered during this upward motion are separated from the plant using a thermal cutting device similar to the one used for harvesting. An interesting feature of this machine is that, with slight modifications of software and hardware, two greenhouse operations could be performed.
3.12 Rose Harvesting
In recent years a harvesting robot for roses has been developed and tested under practical circumstances in The Netherlands. The prototype is shown in Fig. 56.20. In this case, the rose plants are grown on moveable benches, so the plants move to the robot instead of the robot moving to the plants in the greenhouse. During the harvest cycle, a camera system travels over the rose plants and locates the roses to be harvested. The harvesting operation is then performed with two manipulators. One manipulator grips the rose just below the flower and pulls it gently aside to create space for the second manipulator to travel down the stem towards the point where it will cut. This second manipulator carries a small stereovision system used for real-time tracking of the stem during the downwards motion. Upon arrival, the manipulator deploys a small scissor-type cutter that cuts the stem. Finally, the rose is pulled out and placed in storage by the first manipulator, whilst the second manipulator moves out of the crop and proceeds with the next harvest cycle. Details of this system are covered in [56.13].
3.13 Strawberry Harvesting
In Japan, the market of strawberries is large, as large as the market of tomatoes, cucumbers, and mandarin oranges. The potentially high-economic return of this product together with the high labor intensity of processes like harvesting, explains the long tradition of research on robotic strawberry harvesting. As shown in Fig. 56.21, the robot consists of a 4 degrees-of-freedom (GlossaryTerm
DOF
) cylindrical manipulator. The robot carries 3 charge-coupled device (GlossaryTermCCD
) cameras. A square LED-array is used for illumination of the scene. Two cameras provide stereo vision for detection and localization of the fruits. Once a fruit is detected, the end-effector is positioned in front of the fruit. The third camera, mounted on the end-effector, is then used to detect the peduncle, i. e., the fruit stem, and calculate its inclination. Based on this data, the orientation of the end-effector is modified with a tilt mechanism and it then approaches the fruit. A successful approach of the strawberry is detected by a reflection-type light sensor in the end-effector. Upon successful completion of this motion, the peduncle is grasped and the stem is cut with a scissor-type cutting mechanism. A suitable manipulator motion then sends the harvested strawberry to a tray. This procedure is repeated for all detected fruits at the current position of the robot. After all picking attempts are completed the full robot platform is moved 210 mm with a gantry-type transportation system running below the strawberry benches. The current prototype has achieved a picking speed of 6.3 s with a success rate of 52.6 %. For more details refer to [56.46, 56.47].3.14 Pot Handling in Nurseries and Greenhouses
Nursery and greenhouse (N&G) farms in the United States produce over two billion potted plants annually (Fig. 56.22a). During the course of production plants are moved several times – distributed onto indoor or outdoor growing beds, repositioned to recover space as orders are filled, and collected for bulk transport. Until recently, only scarce manual labor was suitable for these jobs (Fig. 56.22b).

Harvest Automation's recently marketed HV-100 (Harvey) robot automates critical N&G tasks (Fig. 56.23a). Plants are lifted and transported using a one degree-of-freedom manipulator coupled with a one degree-of-freedom gripper. The mobility system employs two differentially controlled drive wheels balanced by a front roller.
Harvey uses a laser rangefinder to identify the containers in which plants are grown. This sensor has a horizontal field-of-view greater than 180 degrees and can detect poorly reflecting plant containers from at least 4 m – even under bright sunlight. The robot also uses the laser rangefinder to detect obstacles and robot teammates.
Four boundary sensors, two forward-pointing and two rear-pointing, are used to find and follow the retroreflective tape that marks the edge of the bed. The tape marker performs double duty as both the robot's global reference and part of the user interface: by positioning the boundary marker, workers indicate to the robot where plants should be placed.
A user interface consisting of a dial and buttons is located on the back of the electronics box. The interface enables users to input the desired plant spacing, bed width, spacing pattern (hexagonal or rectangular) and the number of aisles the robot should instantiate.
Robust operation of the robot is enabled by a behavior-based programming scheme. Figure 56.23b illustrates spacing, a task that gives each plant sufficient room to grow without interfering with its neighbors. The closely packed plants in the foreground have been delivered to the growing bed and placed on the ground. Robot A identifies plant containers using the rangefinder; it selects for pick-up the container farthest downfield. After closing on and capturing a container with its gripper, the robot turns toward the boundary, as robot B is doing. Nearing the tape marker, the two forward boundary sensors detect the tape and compute the relative angle between robot and tape, enabling the robot to turn and align with the tape, keeping it on its left side.
After acquiring the boundary marker, the robot servos along the tape, as robot C is doing. When the plant destination comes within view, the laser rangefinder is used to identify the next empty space in the pattern of spaced plants. The robot then computes an efficient path to the put-down point, moves to that position, and drops off the plant. Afterward, it turns back toward the source of plants and repeats the process, using dead reckoning to find its way to the vicinity of the source plants. Following this strategy, robots can work singly or in teams of various sizes.
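The spacing task above can be summarized as a behavior cycle. In this minimal sketch, the state names and transitions paraphrase the description in the text and are not the shipped controller.

```python
# Behavior-cycle sketch of Harvey's spacing task (paraphrased states).

TRANSITIONS = {
    "FIND_PLANT":    "GRIP_PLANT",     # rangefinder selects farthest container
    "GRIP_PLANT":    "GOTO_BOUNDARY",  # gripper captures the pot
    "GOTO_BOUNDARY": "FOLLOW_TAPE",    # boundary sensors acquire the tape
    "FOLLOW_TAPE":   "PLACE_PLANT",    # next empty spot in the pattern sighted
    "PLACE_PLANT":   "RETURN",         # pot dropped at the put-down point
    "RETURN":        "FIND_PLANT",     # dead-reckon back to the source pile
}

def run_cycle(start="FIND_PLANT"):
    """Follow the transitions once around the cycle, returning the states
    visited for one complete pick-and-place of a single plant."""
    state, visited = start, [start]
    while True:
        state = TRANSITIONS[state]
        if state == start:
            return visited
        visited.append(state)
```

Because each behavior reacts only to local sensing (rangefinder, boundary sensors, dead reckoning), multiple robots can run the same cycle concurrently without a shared map.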
3.15 Precision Forestry
About 30 % of the earth's land mass is covered by forests. In addition to providing the raw materials for furniture, paper, clothing, and heating, forests provide habitats to diverse animal species and form the source of livelihood for many human settlements. In 2012, in Germany alone, an estimated 1.3 million jobs in the wood-processing industry generated revenue of more than 180 billion euros. Exploiting forests in an environmentally friendly yet economical way is thus a major issue. Robotics is currently being used to preserve the forest and to secure jobs in forestry and related industries.
Today, work in forests is already highly mechanized, and in the last decade, mobile robotics know-how combined with new virtual reality and remote sensing techniques paved the way for a new robotics perspective on work machines in the forest. Wood harvesters (Fig. 56.24) and forwarders, advanced work machines for log cutting and transport, are currently a major target of automation efforts [56.45, 56.48, 56.49]. Based on the insight that accurate machine localization cannot be based solely on GPS, mobile-robotics capabilities for localization and navigation have been introduced. Measured GPS errors of up to 50 m, due to signal absorption in the canopy and to multi-path effects, make GPS practically useless for precise localization and navigation. In the VisualGPS approach, the GPS position serves only as a starting point for a combined Kalman filter and Monte Carlo localization algorithm based on optical range measurements from laser scanners. The approach yields the machine position with an accuracy of 0.5 m and thus provides a sound basis for the development of navigation and (semi-)autonomous logging procedures [56.50, 56.51].
Practical experiments show that map building based on simultaneous localization and mapping (SLAM) techniques [56.52] is not very applicable in these environments, because the resulting errors in the map are not bounded; the maps are thus not well suited for large-area operation or for matching against parcel borders. Instead, the highly accurate position estimation of VisualGPS builds on a previously generated map of single trees. A multi-sensor fusion approach helps to build this map from airborne and satellite imagery in multiple spectral ranges as well as from airborne laser scanning. The multispectral imagery provides the basis for tree species determination, which is solved as an advanced pattern-classification problem [56.53]. The next step, single-tree delineation, is based on airborne imagery in combination with 3-D surface data from the laser scans. A modified watershed algorithm [56.54] delineates the tree crowns. From the tree crowns' sizes and the species information, the geo-referenced stem position and even the tree diameters can be deduced. This not only allows a global, geo-referenced tree map to be generated, but also suffices for generating a semantic world model, the virtual forest (Fig. 56.25).

Each work machine in the forest also builds local maps of visible trees using machine-mounted laser scanners and a compass. For localization purposes, these local maps are matched against the previously generated global map of trees by means of a particle filter. For a moving machine, the prediction step is carried out by a Kalman filter [56.50, 56.51].
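The particle-filter matching step can be sketched as follows: each particle is a candidate machine pose, weighted by how well the locally scanned stem positions align with the global tree map. The Gaussian noise model and the toy map below are assumptions for illustration; the real system combines this with a Kalman prediction step.

```python
import math

# Hedged sketch of weighting candidate poses against a global tree map.

def to_world(point, pose):
    """Transform a scanner-frame point into the world frame for pose
    (x, y, heading)."""
    px, py, th = pose
    x, y = point
    return (px + x * math.cos(th) - y * math.sin(th),
            py + x * math.sin(th) + y * math.cos(th))

def particle_weight(pose, local_stems, tree_map, sigma=0.5):
    """Gaussian likelihood of each observed stem given its nearest mapped
    tree; the product over stems scores the candidate pose."""
    w = 1.0
    for pt in local_stems:
        wx, wy = to_world(pt, pose)
        d2 = min((wx - mx) ** 2 + (wy - my) ** 2 for mx, my in tree_map)
        w *= math.exp(-d2 / (2.0 * sigma ** 2))
    return w

def best_pose(particles, local_stems, tree_map):
    """Maximum-likelihood particle (a full filter would resample instead)."""
    return max(particles,
               key=lambda p: particle_weight(p, local_stems, tree_map))
```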
Figure 56.25 shows the two important results of the global map-building process. The images on the left show just the tree map; the white noise in the upper image denotes the particles representing potential poses for the Monte Carlo localization process, while in the lower image the particles have converged to the machine's correct position. The image on the right gives an impression of the virtual forest generated from the same data used for the tree map. This virtual forest is used to visualize the deduced information (e. g., tree species, tree height, crown shape and size) in a high-end virtual reality representation. This approach follows a general trend in robotic applications for natural or space environments: the development of virtual testbeds that visualize world-model information in an intuitively comprehensible manner. Advanced virtual testbeds then provide the simulation capabilities to develop and test the robotic application against the virtual world, saving time and costs [56.55].
The laser rangefinder-based VisualGPS algorithms are being enhanced with optical stereo image recognition capabilities and ported to mobile platforms like the Seekur Jr. This is because the work, which started as an automation effort for work machines in the forest, has recently turned into an environment-monitoring project that helps the forester inspect, protect, and attend to the forest with increased efficiency and effectiveness.
3.16 Semi-Automation of Forwarder Crane
In cut-to-length harvesting, which is the predominant tree harvesting method in Europe, delimbed and bucked timber is collected from the forest to the roadside by a forwarder. For an operator of such a machine, a large part of the work cycle consists of maneuvering the onboard hydraulic crane (Fig. 56.26). The operator controls the links individually using two joysticks. The redundant kinematic design is necessary for dexterity and the large active workspace of the machine, but it also makes the crane control difficult to learn and to perform in an efficient way.
Considering the technological progress and capabilities of today’s forestry machines, the current manual control approach creates bottlenecks in the forwarding process. Semi-automation of these operations would thus be beneficial for productivity. Automation of some repetitive motions is also desired in order to support the operator and to reduce the physical and cognitive workload.
To this end, a small-scale hydraulic forwarder crane has been installed in a laboratory environment [56.56]. The crane is equipped with position and pressure sensors, as well as with electronics and software for rapid prototyping of automated control strategies. The same equipment has been installed on a commercial Valmet 830 forwarder by Komatsu Forest for field testing.
Using these platforms, new feedback control methods and trajectory planning procedures have been developed to construct and implement time-efficient motions [56.57]. For a given geometric path, the speed and relative use of the different crane links along this path can be shaped to achieve motions with optimal performance within the limitations of the machine. Velocity constraints for the individual joints are particularly restrictive in hydraulic manipulators.
In [56.58], trajectories planned with this approach are compared with recorded motions by professional human operators with respect to execution time. Results show that the performance can be improved significantly by path-constrained replanning of human-operated motions (Fig. 56.27). Additional replanning of the geometrical path, along with efficient velocity profiles along the path, can further improve the time efficiency of the crane motions.
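The core idea behind this path-constrained replanning can be sketched numerically: along a fixed geometric path q(s), the path speed is raised to the largest value the individual link velocity limits allow, and traversal time is the integral of ds over that speed. The limits and path derivatives below are illustrative numbers, not parameters of the Valmet 830 crane.

```python
# Sketch of velocity-limited path speed and traversal time (assumed values).

def max_path_speed(dq_ds, v_limits):
    """Largest s_dot with |dq_i/ds| * s_dot <= v_limit_i for every joint."""
    return min(v / abs(d) for d, v in zip(dq_ds, v_limits) if d != 0.0)

def traversal_time(path_derivatives, v_limits, ds):
    """Integrate ds / s_dot over the discretized path samples."""
    return sum(ds / max_path_speed(dq, v_limits) for dq in path_derivatives)
```

Reshaping how the redundant links share the motion along the path changes the derivatives dq/ds, which is why replanning the relative use of the links can shorten the traversal time even on the same geometric path.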
Tracking the trajectories with feedback control requires sensors for measuring the joint positions. Installation of such sensors may not be desired due to high costs or durability problems in the rough outdoor environment. Using the approach in [56.59], motions can be found that are robust to uncertainties in initial conditions and therefore possible to perform in open loop. The end-effector position along the trajectory can also be estimated using an observer and the signals available from pressure sensors. Such a design can be a more convenient solution in the harsh conditions of the forest.
The introduction of new tasks, or of new ways to perform tasks, creates new challenges for human-machine interaction (HMI). Semi-autonomous operation requires the automated parts to be well integrated with the manual work. To facilitate cooperation between human and computer in manipulator control, the distribution of work and the transitions between them are important. Hansson and Servin [56.60] describe an implementation of shared and traded control for this setup. Results from user tests show that semi-autonomous operation reduces the workload and has significant potential to improve the productivity of inexperienced operators.

A reliable framework of sensors and low-level control, along with efficient motion-planning strategies, allows the development of more advanced interaction technologies. One possible future scenario is teleoperation, which presents several advantages to both machine owners and operators. First, it opens the way for a redesign of the machine without the cabin, saving weight and cost. Second, the working environment is improved by reducing noise and vibration levels. A virtual environment-based system for teleoperation of a forwarder crane was demonstrated in [56.61]; follow-up work is investigating how a virtual environment can be a useful tool for presenting feedback to the operator in different scenarios. This method does, however, require environment reconstruction to locate logs and potential obstacles, both of which are considered in [56.62].
3.17 Livestock Breeding and Nurturing
In some cases animal breeding is a proactive process, such as in hatcheries where optimum conditions must be managed; in others, nature is allowed to take its course. Sometimes intervention may be needed in the birthing process, but in the Australian outback cattle must take their chances. Nevertheless, newborns must be marked with transponder tags under the national livestock identification scheme (NLIS) [56.63].

Other species such as kangaroos, feral pigs, wild camels, and horses can breed and run wild until harvested or culled. When creatures are confined, tasks include environmental monitoring and control, feeding, cleaning, and growth monitoring.
A Dutch company, Lely [56.64], actively markets products with a strong robotic element. When cattle are housed in stalls, the Juno [56.65] patrols the laneway at the side of the pen, pushing fodder toward the rails to put it within reach of the feeding cattle (Fig. 56.28).
The Vector [56.66] goes even further. A mobile robot carrying a hopper, it navigates automatically between the barn that holds the feed and the stalls where the cattle are waiting (Fig. 56.29).
Other equipment in Australian feedlots estimates the weight gain of cattle, such as a system that reads the NLIS tag of a beast at a water trough and records the weight borne by its front legs.
In the Australian outback, similar sensing systems perform a variety of functions. Water is scarce, so watering points can be fenced, giving access only via a laneway that can be monitored. Some earlier projects [56.67] have involved species identification to control access by means of an automatically operated gate, but similar technology has valuable use in cattle production.
Again the NLIS tag is read, while a walk-over weighing system notes the weight gain of the animal. Vision can be used to identify cows that are followed by a calf, and automatic gates can then direct these to a separate paddock for tagging the calves.
Many of the opportunities are for automation, which may be a precursor to robotics. Several projects have involved machine-vision monitoring of cattle [56.68] or of pigs in piggeries [56.69]. Weight gain is estimated visually using precision cameras (Fig. 56.30). Other analyses concern behavior [56.70].
Mobile robots can also be used in sheds where chickens are bred for meat rather than for laying. A mobile robot has been proposed that moves among the poultry to monitor air quality, including temperature, relative humidity, and the concentrations of ammonia and dust.
Australian honeybees must also be protected from pests carried by bees on arriving ships. Bait boxes are deployed at ports around Australia to attract any such swarms. A remote monitoring system with camera sensing gives an early alert to the apiary officer [56.71].
3.18 Livestock Exploitation
Traditional farming practice is to milk cows twice a day, in the morning and in the evening. Automatic milking parlors with robotic milking stations are being widely adopted to improve dairy productivity and convenience (Fig. 56.31). These allow the cows to decide for themselves when to come in for milking and feeding. In an automated milking station, teats are located and the milking system is attached automatically. Yield is monitored and udders are automatically inspected for injury and disease. Human intervention is minimal. The Lely company is active in this area [56.72].
Once a year sheep must be shorn of their wool. In the past, the University of Western Australia developed an automatic shearing system known as the Shear Magic [56.73] (Fig. 56.32). The commercial exploitation was not a success and there seems to have been little activity in this area in recent years.
Eggs must be collected from battery hens. As they are laid, they roll from the cages onto a conveyer belt. Among other robotic applications in egg production, machine vision is used to detect foreign bodies or damaged eggs [56.74] (Fig. 56.33).
3.19 Livestock Harvesting, Slaughtering, and Processing
The system already described for monitoring cattle that approach a waterhole is also the subject of a funded project for mustering. Cattle, selected by means of their NLIS tags, are diverted into a compound from which they can be collected for transport to a feedlot, in preparation for slaughter [56.75] (Fig. 56.34).
The vision-based system that can control access to waterholes can also be used for the collection of feral animals such as pigs. There are said to be more wild camels in Australia than in Arabia and these too can be captured by such a system.
At the start of the 1990s, Fututech was hailed as the future of slaughterhouses [56.76]. Robotic systems to be installed at the Kilcoy Pastoral Company in Queensland would automate the whole process from the knocking box to the chill room. Construction started in 1992, but by June 1994 the project was abandoned, at a cost of over $40 million [56.77]. It was a project ahead of its time, relying on a central computer and kilometers of cabling rather than on distributed intelligence.
But much is different now. Robots have become commonplace in abattoirs, supported by sophisticated sensing systems to locate skeletal features [56.78] (Fig. 56.35). Pigs probably outnumber cattle and the processed animals include sheep and goats. Deboning systems are marketed for products including chicken legs and ham bones [56.79, 56.80].
In terms of slaughter volume, the lead must be held by poultry, where a single plant can process up to 4000 birds per hour [56.81].
3.20 Aerial-Based Precision Agriculture
Unmanned aerial vehicles (UAVs) have recently begun to be applied to precision agriculture. This is a new development and results are still preliminary, albeit very promising. In fact, the Association for Unmanned Vehicle Systems International forecasts that 80 % of all UAVs sold in the United States in the 2015–2025 period will go to serve the agricultural market [56.82]. Even if the actual figure turns out not to be that high, there is no question that roboticists and agricultural engineers are investing significant time and resources in understanding how UAVs can improve agricultural efficiency and reduce costs.

Dong et al. [56.83] use high-resolution images and on-board sensor data from an aerial vehicle (Fig. 56.36) to create a sequence of dense 3-D reconstructions of the crop over time. This four-dimensional (4-D) spatio-temporal reconstruction is used to segment the canopies and estimate how the crown radius and height of each plant evolve. The processing pipeline was tested on data collected in a field planted with corn, broccoli, and cabbage in Tifton, GA, USA (Fig. 56.37).

In many potato-growing areas, the potato crop does not senesce naturally, so tuber maturation is artificially induced by killing the haulm 10 to 25 days prior to harvest. Reglone is a widely used potato haulm-killing herbicide. Building on the known relationship between potato crop biomass status, expressed as the weighted difference vegetation index (WDVI) [56.84], and the minimum effective dose of Reglone, Kempenaar et al. [56.85] successfully demonstrated the use of crop biomass imaging with a multi-spectral camera mounted under an unmanned aircraft (Fig. 56.38) for variable-rate application. The WDVI map was converted into a dose grid map (Fig. 56.39) adjusted to the boom width of the field sprayer. In the field, the average use was 0.9 liters of Reglone per hectare, with satisfactory efficacy; standard practice would have been 2 liters per hectare. Unmanned aerial imaging thus enabled savings of more than 50 %, with no loss in potato haulm-killing efficacy, harvestability of the potatoes, or final product quality.

Wheat is the most widely grown arable crop in the UK, covering around 1.6 million hectares and producing 11.9 million metric tons in 2013. As with other crops, wheat has its share of weed competitors, one of which is black-grass. This prolific weed is highly competitive and increasingly resistant to chemical control, making it one of the biggest challenges facing the UK's agricultural sector.
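The potato case above hinges on converting a biomass map into a dose map. A minimal sketch of that conversion is given below. The WDVI formula (near-infrared reflectance minus soil-line-corrected red reflectance) is standard, but the soil-line slope, the linear WDVI-to-dose model, and the dose limits here are illustrative assumptions, not the calibration used by Kempenaar et al.

```python
# Sketch of a WDVI-based variable-rate dose map. All coefficients are
# illustrative assumptions, not an agronomic calibration.

import numpy as np

def wdvi(nir: np.ndarray, red: np.ndarray, soil_slope: float = 1.5) -> np.ndarray:
    """Weighted difference vegetation index: NIR minus soil-line-corrected red."""
    return nir - soil_slope * red

def dose_map(wdvi_img, d_min=0.3, d_max=2.0, w_lo=0.1, w_hi=0.5):
    """Map biomass (WDVI) linearly to a herbicide dose in L/ha, clamped."""
    frac = np.clip((wdvi_img - w_lo) / (w_hi - w_lo), 0.0, 1.0)
    return d_min + frac * (d_max - d_min)

def to_boom_grid(dose: np.ndarray, cells_per_boom: int) -> np.ndarray:
    """Average the dose over blocks matching the sprayer boom width."""
    h, w = dose.shape
    w2 = w - w % cells_per_boom
    blocks = dose[:, :w2].reshape(h, -1, cells_per_boom)
    return blocks.mean(axis=2)

nir = np.array([[0.6, 0.2], [0.5, 0.4]])  # toy 2x2 reflectance images
red = np.array([[0.1, 0.1], [0.1, 0.1]])
d = dose_map(wdvi(nir, red))
print(d.round(2))          # denser canopy pixels receive a higher dose
print(to_boom_grid(d, 2))  # one averaged dose per boom-width block
```

The boom-width averaging step mirrors the adjustment of the dose grid to the field sprayer mentioned above: the sprayer cannot resolve individual pixels, so the prescription is aggregated to the hardware's spatial resolution.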
Researchers from URSULA Agriculture have demonstrated a system that combines multispectral sensing with UAVs to capture imagery at 10 cm resolution (Fig. 56.40) [56.86]. Using the unique spectral and morphological properties of the black-grass, they developed classification processes that are applied to the image to automatically delineate the black-grass from the host crop.
For the farmer and agronomist, the output is supplied in the form of visual maps and spatially attributed data points that are transferable through farm planning software to precision guided machinery (Fig. 56.41). URSULA Agriculture’s imagery analysis is able to not only identify the location and in-field extent of black-grass infestation, but also the density and area of black-grass plants, all of which will have a bearing on control decisions. The information provides the farmer with vital data to help contain black-grass infestation in the current growing season. Crucially, it also helps with long-term control by informing management methods such as higher seed rates to increase competition in black-grass hotspots, alternative cultivation techniques, or adjusting cropping strategies, all of which can help control black-grass alongside chemical herbicides.
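At its core, the per-pixel part of such a classification assigns each pixel to the class whose spectral signature it most resembles. The toy sketch below uses a nearest-centroid rule with made-up reflectance signatures; URSULA Agriculture's actual classifiers also exploit morphological properties and are not public.

```python
# Toy per-pixel spectral classifier (nearest centroid). The band values
# (green, red-edge, NIR) below are invented for illustration only.

import math

CENTROIDS = {
    "wheat":       (0.08, 0.30, 0.55),
    "black_grass": (0.10, 0.22, 0.42),
    "bare_soil":   (0.15, 0.18, 0.22),
}

def classify_pixel(pixel):
    """Assign a pixel to the class with the nearest spectral centroid."""
    return min(CENTROIDS, key=lambda c: math.dist(pixel, CENTROIDS[c]))

print(classify_pixel((0.09, 0.23, 0.44)))  # close to the weed signature
```

A per-pixel rule like this would then be aggregated spatially (density and area of detections) to produce the infestation maps supplied to the farmer.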
4 Conclusion
The robotics community has made great strides toward introducing robotic systems in agriculture and forestry. The next ten years will bring improved sensing and mobility, and some advances in manipulation for crop production.
In orchard crops, there is still a need for improvement in algorithms that calculate crop yield and canopy volume or that automatically detect insects and disease. Here the breakthroughs may end up coming from physicists and engineers producing novel imaging systems rather than from roboticists. The continued conversion of existing orchards to the tree wall architecture means that autonomous orchard vehicles and platforms will soon have access to the vast majority of the fruit production acreage, at least in developed countries.
Manipulation in orchard environments is a growing and challenging field. The long-sought harvesting robot is still an object of fiction, especially when it comes to its economic feasibility. In the apple industry, for example, the best human workers are capable of picking 40 to 60 pieces of fruit per minute, keeping bruising down to a few percent of the volume picked. Our community is relatively far from achieving this level of performance, but we do not need to get there in one shot. Many value-added operations, including pruning, thinning, and harvesting, could benefit from an automated manipulation-based solution. Manipulation should be introduced into low-precision tasks first, where economic gains can be more readily realized, before the technology is refined and directed at tasks such as apple picking. One example is automatic grape shoot and fruit thinning according to each individual vine’s crop load [56.87]. This is in contrast to today’s methods, where crop load management is conducted uniformly over the entire vineyard, leading to significant variations in yield. Achieving vine-level manipulation in an economically sound way would help move the grape industry in the direction of variable rate management, so that one day, each vine, leaf, and fruit will be treated individually according to its own needs of water, nutrients, light, etc. [56.88].
Despite more than thirty years of research, robotic systems for plant maintenance operations in protected cultivation have not become commercially available. Clearly, the complexity of the operation is an issue, due to the highly unstructured working environment, the inherent natural variability of the crop, and unfavorable environmental conditions, e.g., strong variations in lighting. In this domain, robust sensing and perception is key to successful deployments. This includes the detection of fruits, stems, and leaves; the determination of their properties, e.g., ripeness; and, finally, their accurate localization in the 3-D working environment. Object detection can be quite a challenge, as in some cases it boils down to finding a green object against a green background. When color differences are more pronounced, detection is easier, but in many cases object occlusion poses another problem: it not only prevents proper detection of the fruits but also hampers proper assessment of their properties.
In the cluttered environments of protected cultivation, manipulation is also quite challenging. While performing a task, the robot should prevent damage to the crop, as this immediately reduces its value. Still, the robot must be allowed some contact, for example with leaves; humans tend to have quite intensive contact with the crop during operations. Mimicking this kind of behavior requires knowledge of whether an object must not be touched at all or whether it may be gently touched and pushed aside, employing a kind of compliant motion.
Generally speaking, robotics-based crop production can be seen as an instantiation of the hand-eye coordination problem: the integration of effective sensing, perception, and intelligent manipulation; in other words, those aspects of a well-trained human that are hard to reproduce. Developing technology to mimic human behavior is one way to go, but it will always be limited by the environmental conditions found at the time the robot was introduced. A systemic approach, where the crop production system is designed top-down with the robotic system as just one of its components, is a much more promising route, yet one that is still in its infancy. For example, growing systems can be modified to make the detection and approach of fruits and flowers easier, allowing for simpler design and operation of automated systems. Likewise, cultural practices may be modified to include the robot as part of the task [56.89, 56.90]. Human-robot interaction might be an interesting and cost-effective intermediate step, allowing for human guidance and supervision only when the machine needs assistance. Such collaborative approaches facilitate data collection in real working environments, offering the opportunity to learn and improve algorithms, thus paving the way for fully autonomous operation in the future.
Another important aspect of the successful introduction of robots in agriculture and forestry concerns the socio-economic barriers to adoption. In addition to executing a task correctly, the robot must do so cost-effectively. Such efficiency can only be demonstrated via numerous field trials, which are time-consuming and costly. Furthermore, the robotic system must have a total cost of ownership, including acquisition, maintenance, user training, and disposal, that is less than the financial gains leveraged through its introduction into the production process. Finally, one fundamental aspect is safety: not only must the hardware and software be designed and validated against explicit safety requirements, there must also be standards and regulations that dictate how and when robots and humans can interact. For an example of a study on the socio-economic barriers to adoption in the apple industry, see [56.91].
The mechanization enabled by the Industrial Revolution and the automation enabled by the information technology era have revolutionized agricultural production to the point where a single farmer can produce grain to feed one hundred people. In the next fifty years, we will witness a similar transformation in fruits, vegetables, and other crops, thanks to the advances in robotic technologies that our community is continually developing for and applying to agriculture and forestry.
Abbreviations
- 2-D: two-dimensional
- 3-D: three-dimensional
- 4-D: four-dimensional
- A&F: agriculture and forestry
- CCD: charge-coupled device
- DOF: degree of freedom
- GARNICS: gardening with a cognitive system
- GNSS: global navigation satellite system
- GPS: global positioning system
- HMI: human–machine interaction
- N&G: nursery and greenhouse
- NLIS: national livestock identification scheme
- OCPP: optimal coverage path planning
- PID: proportional–integral–derivative
- PWM: pulse-width modulation
- RTK: real-time kinematics
- SLAM: simultaneous localization and mapping
- TFP: total factor productivity
- TOF: time-of-flight
- UAV: unmanned aerial vehicle
- WDVI: weighted difference vegetation index
- WSN: wireless sensor network
References
J.A. Foley, N. Ramankutty, K.A. Brauman, E.S. Cassidy, J.S. Gerber, M. Johnston, N.D. Mueller, C. O'Connell, D.K. Ray, P.C. West, C. Balzer, E.M. Bennett, S.R. Carpenter, J. Hill, C. Monfreda, S. Polasky, J. Rockström, J. Sheehan, S. Siebert, D. Tilman, D.P.M. Zaks: Solutions for a cultivated planet, Nature 478, 337–342 (2011)
A. Alleyne: Editor's note: Agriculture and information technology, Natl. Acad. Eng. Bridge, Issue Agric. Inf. Technol. 41(3), 3–4 (2011)
Global Harvest Initiative: The 2012 Global Agricultural Productivity Report, http://goo.gl/GBPvjK
J. Reid: The impact of mechanization on agriculture, Nat. Acad. Eng. Bridge, Issue Agric. Inf. Technol. 41(3), 22–29 (2011)
E.J. Van Henten: Greenhouse mechanization: State of the art and future perspective, Acta Hortic. 710, 55–69 (2006)
S.T. Nuske, S. Achar, T. Bates, S.G. Narasimhan, S. Singh: Yield estimation in vineyards by visual grape detection, IEEE/RSJ Int. Conf. Intell. Robots Syst., San Francisco (2011)
Q. Wang, S.T. Nuske, M. Bergerman, S. Singh: Automated crop yield estimation for apple orchards, Int. Symp. Exp. Robotics, Quebec City (2012)
J. Deere: Integrated AutoTrac System, http://goo.gl/CsxxfL
K. Gallardo, M. Taylor, H. Hinman: 2009 Cost estimates of establishing and producing gala apples in Washington, Washington State University Extension Fact Sheet FS005E, http://goo.gl/BHBZrb (2010)
B. Hamner, M. Bergerman, S. Singh: Results with autonomous vehicles operating in specialty crops, IEEE Int. Conf. Robotics Autom., St. Paul, MN (2012)
G. Giacomelli, N. Castilla, E.J. Van Henten, D.R. Mears, S. Sase: Innovation in greenhouse engineering, Acta Hortic. 801, 75–88 (2008)
C.W. Bac, E.J. van Henten, J. Hemming, Y. Edan: Harvesting robots for high-value crops: State-of-the-art review and challenges ahead, J. Field Robotics 31(6), 888–911 (2014)
J.C. Noordam, J. Hemming, C. van Heerde, F. Golbach, R. van Soest, E. Wekking: Automated rose cutting in greenhouses with 3D vision and robotics: Analysis of 3D vision techniques for stem detection, Acta Hortic. 691, 885–892 (2005)
T. Rath, M. Kawollek: Robotic harvesting of Gerbera Jamesonii based on detection and three-dimensional modeling of cut flower pedicels, Comput. Electron. Agric. 66, 85–92 (2009)
R. Berenstein, O.B. Shahar, A. Shapiro, Y. Edan: Grape clusters and foliage detection algorithms for autonomous selective vineyard sprayer, Intell. Serv. Robotics 3, 233–243 (2010)
E.J. Van Henten, B.A.J. Van Tuijl, G.J. Hoogakker, M.J. Van Der Weerd, J. Hemming, J.G. Kornet, J. Bontsema: An autonomous robot for de-leafing cucumber plants grown in a high-wire cultivation system, Biosyst. Eng. 94, 317–323 (2006)
M. Monta, N. Kondo, Y. Shibano: Agricultural robot in grape production system, Int. Conf. Robotics Autom. (1995)
Wikipedia: Controlled Traffic Farming, http://goo.gl/FODY04
J. Jin, L. Tang: Coverage path planning on three-dimensional terrain for arable farming, J. Field Robotics 28(3), 424–440 (2011)
D.C. Slaughter, D.K. Giles, D. Downey: Autonomous robotic weed control systems: A review, Comput. Electron. Agric. 61, 63–78 (2008)
F.K. Van Evert, J. Samsom, G. Polder, M. Vijn, H.-J. van Dooren, A. Lamaker, G.W.A.M. van der Heijden, C. Kempenaar, T. van der Zalm, B. Lotz: A robot to detect and control broad-leaved dock (Rumex obtusifolius L.) in grassland, J. Field Robotics 28, 264–277 (2011)
G. Polder, F.K. Van Evert, A. Lamaker, A. De Jong, G.W.A.M. Van der Heijden, L.A.P. Lotz, T. Van der Zalm, C. Kempenaar: Weed detection using textural image analysis, 6th Bienn. Conf. Eur. Fed. IT Agric. (EFITA), Glasgow (2007), available online at http://edepot.wur.nl/28203
F.K. Van Evert, G. Polder, G.W.A.M. Van der Heijden, C. Kempenaar, L.A.P. Lotz: Real-time, vision-based detection of Rumex obtusifolius L. in grassland, Weed Res. 49, 164–174 (2009)
H. Böhm, J. Finze: Überprüfung der Effektivität der maschinellen Ampferregulierung im Grünland mittels WUZI unter differenzierten Standortbedingungen [Testing the effectiveness of mechanical control of docks in grassland with the WUZI under a variety of conditions.], http://goo.gl/BVdqgy (2004)
A.T. Nieuwenhuizen, J.W. Hofstee, E.J. van Henten: Adaptive detection of volunteer potato plants in sugar beet fields, Precis. Agric. 11, 433–447 (2009)
A.T. Nieuwenhuizen, J.W. Hofstee, J.C. van de Zande, J. Meuleman, E.J. van Henten: Classification of sugar beet and volunteer potato reflection spectra with a neural network and statistical discriminant analysis to select discriminative wavelengths, Comput. Electron. Agric. 73, 146–153 (2010)
A.T. Nieuwenhuizen, J.W. Hofstee, E.J. van Henten: Performance evaluation of an automated detection and control system for volunteer potatoes in sugar beet fields, Biosyst. Eng. 107, 46–53 (2010)
T. Bakker, H. Wouters, K. Van Asselt, J. Bontsema, L. Tang, J. Müller, G. Van Straten: A vision based row detection system for sugar beet, Comput. Electron. Agric. 60(1), 87–95 (2008)
T. Bakker, K. Van Asselt, J. Bontsema, J. Müller, G. Van Straten: A path following algorithm for mobile robots, Auton. Robots 29(1), 85–97 (2010)
T. Bakker, K. van Asselt, J. Bontsema, J. Müller, G. van Straten: Autonomous navigation using a robot platform in a sugar beet field, Biosyst. Eng. 109(4), 357–368 (2011)
J. Katupitiya, R. Eaton, T. Yaqub: Systems engineering approach to agricultural automation: New developments, 1st Annual IEEE Syst. Conf. (2007) pp. 298–304
D. Kohanbash, A. Valada, G.A. Kantor: Irrigation control methods for wireless sensor network, Am. Soc. Agric. Biol. Eng. Annual Meeting (2012), available online at http://goo.gl/3NJyXb
S. Singh, M. Bergerman, J. Cannons, B. Grocholsky, B. Hamner, G. Holguin, L. Hull, V. Jones, G. Kantor, H. Koselka, G. Li, J. Owen, J. Park, W. Shi, J. Teza: Comprehensive automation for specialty crops: Year 1 results and lessons learned, J. Intell. Serv. Robotics, Special Issue Agric. Robotics 3(4), 245–262 (2010)
B. Davis: CMU-led automation program puts robots in the field, AUVSI's Unmanned Syst.: Mission Crit. 2, 38–40 (2012)
R. Lenain, J. Preynat, B. Thuilot, P. Avanzini, P. Martinet: Adaptive formation control of a fleet of mobile robots: Application to autonomous field operations, Int. Conf. Robotics Autom. (2010) pp. 1241–1246
A. Shapiro, E. Korkidi, A. Demri, R. Riemer, Y. Edan, O. Ben-Shahar: Toward elevated agrobotics: An autonomous field robot for spraying and pollinating date palm trees, J. Field Robotics 26(6/7), 572–590 (2009)
S. Foix, G. Alenyà, C. Torras: Lock-in time-of-flight (ToF) cameras: A survey, IEEE Sensors J. 11(9), 1917–1926 (2011)
R. Klose, J. Penlington, A. Ruckelshausen: Usability study of 3D time-of-flight cameras for automatic plant phenotyping, Workshop Comput. Image Anal. Agric. (2009) pp. 93–105
G. Alenyà, B. Dellen, S. Foix, C. Torras: Leaf segmentation from ToF data for robotized plant probing, IEEE Robotics Autom. Mag. 20(3), 50–59 (2013)
G. Alenyà, B. Dellen, S. Foix, C. Torras: Robotic leaf probing via segmentation of range data into surface patches, IROS Workshop Agric. Robotics, Villamura (2012)
G. Alenyà, B. Dellen, C. Torras: 3D modelling of leaves from color and ToF data for robotized plant measuring, IEEE Int. Conf. Robotics Autom., Shanghai (2011) pp. 3408–3414
B. Dellen, G. Alenyà, S. Foix, C. Torras: Segmenting color images into surface patches by exploiting sparse depth data, IEEE Workshop Appl. Comput. Vis., Kona (2011) pp. 591–598
E.J. Van Henten, B.A.J. Van Tuijl, J. Hemming, J.G. Kornet, J. Bontsema, E.A. Van Os: Field test of an autonomous cucumber picking robot, Biosyst. Eng. 86, 305–313 (2003)
E.J. Van Henten, J. Hemming, B.A.J. Van Tuijl, J.G. Kornet, J. Bontsema: Collision-free motion planning for a cucumber picking robot, Biosyst. Eng. 86, 135–144 (2003)
E.J. Van Henten, J. Hemming, B.A.J. Van Tuijl, J.G. Kornet, J. Meuleman, J. Bontsema, E.A. Van Os: An autonomous robot for harvesting cucumbers in greenhouses, Auton. Robots 13, 241–258 (2002)
S. Hayashi, K. Shigematsu, S. Yamamoto, K. Kobayashi, Y. Kohno, J. Kamata, M. Kurita: Evaluation of a strawberry-harvesting robot in a field test, Biosyst. Eng. 105, 160–171 (2010)
S. Hayashi, S. Yamamoto, S. Saito, Y. Ochiai, Y. Kohno, K. Yamamoto, J. Kamata, M. Kurita: Development of a movable strawberry-harvesting robot using a travelling platform, Proc. Int. Conf. Agric. Eng. CIGR-AgEng 2012, Valencia, Spain (2012)
F. Georgsson, T. Hellström, T. Johansson, K. Prorok, O. Ringdahl, U. Sandström: Development of an autonomous path tracking forest machine – a status report. In: Field and Service Robotics: Results of the 5th International Conference, ed. by P. Corke, S. Sukkarieh (Springer, Berlin, Heidelberg 2006) pp. 603–614
M. Miettinen, M. Öhman, A. Visala, P. Forsman: Simultaneous localization and mapping for forest harvesters, IEEE Int. Conf. Robotics Autom. (2007) pp. 517–522
J. Roßmann, M. Schluse, C. Schlette, A. Bücken, P. Krahwinkler, M. Emde: Realization of a highly accurate mobile robot system for multi purpose precision forestry applications, The 14th Int. Conf. Adv. Robotics (2009) pp. 133–138
J. Roßmann, P. Krahwinkler, A. Bücken: Mapping and navigation of mobile robots in natural environments. In: Advances in Robotics Research - Theory, Implementation, Application. German Workshop on Robotics, ed. by T. Kröger, F.M. Wahl (Springer, Berlin, Heidelberg 2009) pp. 43–52
S. Thrun, W. Burgard, D. Fox: Probabilistic Robotics (MIT Press, Cambridge 2005)
P. Krahwinkler, J. Roßmann, B. Sondermann: Support vector machine based decision tree for very high resolution multispectral forest mapping, IEEE Int. Geosci. Remote Sens. Symp. (2011) pp. 43–46
A. Bücken, J. Roßmann: From the volumetric algorithm for single-tree delineation towards a fully-automated process for the generation of virtual forests. In: Progress and New Trends in 3D Geoinformation Sciences. Lecture Notes in Geoinformation and Cartography, ed. by J. Pouliot, S. Daniel, F. Hubert, A. Zamyadi (Springer, Berlin, Heidelberg 2013) pp. 79–99
J. Roßmann, T. Jung, M. Rast: Developing virtual testbeds for mobile robotic applications in the woods and on the moon, The IEEE/RSJ 2010 Int. Conf. Intell. Robots Syst. (2010) pp. 4952–4957
A. Shiriaev, L. Freidovich, I. Manchester, U. Mettin, P. La Hera, S. Westerberg: Status of Smart Crane Lab Project: Modeling and Control for a Forwarder Crane, Department of Applied Physics and Electronics (Umeå University, Umeå 2008)
U. Mettin, P.X. La Hera, D.O. Morales, A.S. Shiriaev, L.B. Freidovich, S. Westerberg: Trajectory planning and time-independent motion control for a kinematically redundant hydraulic manipulator, Int. Conf. Adv. Robotics (2009)
U. Mettin, S. Westerberg, P.X. La Hera, A. Shiriaev: Analysis of human-operated motions and trajectory replanning for kinematically redundant manipulators, IEEE/RSJ Int. Conf. Intell. Robots Syst. (2009)
D. Morales, S. Westerberg, P. La Hera, U. Mettin, L. Freidovich, A. Shiriaev: Open-loop control experiments on driver assistance for crane forestry machines, IEEE Int. Conf. Robotics Autom. (2011)
A. Hansson, M. Servin: Semi-autonomous shared control of large-scale manipulator arms, Control Eng. Pract. 18(9), 1069–1076 (2010)
S. Westerberg, I. Manchester, U. Mettin, P. La Hera, A. Shiriaev: Virtual environment teleoperation of a hydraulic forestry crane, IEEE Int. Conf. Robotics Autom. (2008)
Y.-C. Park, A. Shiriaev, S. Westerberg, S. Lee: 3D log recognition and pose estimation for robotic forestry machine, IEEE Int. Conf. Robotics Autom. (2011)
NLIS: National Livestock Identification System, NSW Department of Primary Industries, http://goo.gl/WoHUi2
Lely home page: http://goo.gl/JCl8CI
Juno feed-pusher: http://goo.gl/V5H2Ye
Lely Vector feeder: http://goo.gl/dLiL3q
M. Dunn, J. Billingsley, N. Finch: Machine vision classification of animals, Mechatron. Mach. Vis. 2003: Future Trends: Proc. 10th Annual Conf. Mechatron. Mach. Vis. Pract. (Research Studies, Baldock 2003)
C. McCarthy, J. Billingsley, N. Finch, P. Murray, J. Gaughan: Cattle liveweight estimation using machine vision assessment of objective body measurements: First results, Proc. 28th Bienn. Aust. Soc. Anim. Production Conf., Vol. 28 (Australian Society of Animal Production, Wagga Wagga 2010)
Y. Wang, W. Yang, P. Winter, L.T. Walker: Non-contact sensing of hog weights by machine vision, Appl. Eng. Agric. 22(4), 577 (2006)
W. Zhu, F. Zhong, X. Li: Automated monitoring system of pig behavior based on RFID and ARM-LINUX, Third Int. Symp. Intell. Inf. Technol. Secur. Inf. (2010) pp. 431–434
S. Barry, D. Cook, R. Duthie, D. Clifford, D. Anderson: Future Surveillance Needs for Honeybee Biosecurity, RIRDC Publication No. 10/107, http://goo.gl/MUrPLo
Lely Astronaut A4 milking system: http://goo.gl/wYhUVH
J.P. Trevelyan: Sensing and control for sheep-shearing robots, IEEE J. Robotics Autom. 5(6), 716–727 (1989)
Robotics in the Poultry Industry, Poultry CRC: http://goo.gl/xH5vy6
A.J. Bubb, T.K. Driver, C.D. James: Case study – Redefining the cowboy: Precision pastoral decision making from remote monitoring and control of cattle, http://goo.gl/Vu0FlQ (2010)
R. Rankin: Further automation, Meat '93, Aust. Meat Ind. Res. Conf., Gold Coast (1993), available online at http://goo.gl/lv2yjh
Performance Audit, Management of Project Fututech, The Meat Research Corporation, http://goo.gl/C5jMgm
Robotic Technologies Limited: RTL – Scott and Silver Fern Farms, http://goo.gl/xZxLOZ
Automated Boning Room System, Scott Group: http://goo.gl/0zFVqi
Chicken Whole Leg Deboning Machine, Mayekawa: http://goo.gl/xhj2Lf
Poultry slaughtering, Hyfoma: http://goo.gl/dzTfVT
Association for Unmanned Vehicle Systems International: The Economic Impact of Unmanned Aircraft Systems Integration in the United States, http://goo.gl/NS5EkT (2013)
J. Dong, L. Carlone, G.C. Rains, T. Coolong, F. Dellaert: 4D mapping of fields using autonomous ground and aerial vehicles, ASABE Int. Conf., Montreal (2014), Paper No. 141912258
F.K. van Evert, P. van der Voet, E. van Valkengoed, L. Kooistra, C. Kempenaar: Satellite-based herbicide rate recommendation for potato haulm killing, Eur. J. Agron. 43, 49–57 (2012)
C. Kempenaar, F.K. van Evert, T. Been: Use of vegetation indices in variable rate application of potato haulm killing herbicides, Proc. ICPA Conf., Sacramento (2014)
C. Pugh, M. Jarman, S. Keyworth, J. Webber, I. Cameron: URSULA Agriculture, http://goo.gl/HzGkRT (2013)
S. Singh, S. Nuske, M. Bergerman, T. Bates, J.M. Peltier: Vineyard efficiency through high-resolution, spatiotemporal crop load measurement and management, http://goo.gl/iqXgWy
S. Blackmore: New concepts in agricultural automation, HGCA Conf. (2009)
J.L. Glancey, W.E. Kee: Engineering aspects of production and harvest mechanization for fresh and processed vegetables, HortTechnology 15, 76–79 (2005)
T. Burks, F. Villegas, M. Hannan, S. Flood, B. Sivaraman, V. Subramanian, J. Sikes: Engineering and horticultural aspects of robotic fruit harvesting: Opportunities and constraints, HortTechnology 15, 79–87 (2005)
K. Ellis, T.A. Baugher, K. Lewis: Use of survey instruments to assess technology adoption for tree fruit production, HortTechnology 20, 1043–1048 (2010)
Video-References
- Autonomous orchard tractors available from http://handbookofrobotics.org/view-chapter/56/videodetails/26
- Autonomous orchard vehicle for specialty crop production available from http://handbookofrobotics.org/view-chapter/56/videodetails/91
- Autonomous utility vehicle – R Gator available from http://handbookofrobotics.org/view-chapter/56/videodetails/93
- Automatic plant probing available from http://handbookofrobotics.org/view-chapter/56/videodetails/95
- Visual GPS – High accuracy localization for forestry machinery available from http://handbookofrobotics.org/view-chapter/56/videodetails/96
- Smart Seeder: An autonomous high accuracy seed planter for broad acre crops available from http://handbookofrobotics.org/view-chapter/56/videodetails/131
- A robot for harvesting sweet-pepper in greenhouses available from http://handbookofrobotics.org/view-chapter/56/videodetails/304
- Ladybird: An intelligent farm robot for the vegetable industry available from http://handbookofrobotics.org/view-chapter/56/videodetails/305
- An automated mobile platform for orchard scanning and for soil, yield, and flower mapping available from http://handbookofrobotics.org/view-chapter/56/videodetails/306
- A mini unmanned aerial system for remote sensing in agriculture available from http://handbookofrobotics.org/view-chapter/56/videodetails/307
- An autonomous cucumber harvester available from http://handbookofrobotics.org/view-chapter/56/videodetails/308
- An autonomous robot for de-leafing cucumber plants available from http://handbookofrobotics.org/view-chapter/56/videodetails/309
- The Intelligent Autonomous Weeder available from http://handbookofrobotics.org/view-chapter/56/videodetails/310
© 2016 Springer-Verlag Berlin Heidelberg
Bergerman, M., Billingsley, J., Reid, J., van Henten, E. (2016). Robotics in Agriculture and Forestry. In: Siciliano, B., Khatib, O. (eds) Springer Handbook of Robotics. Springer Handbooks. Springer, Cham. https://doi.org/10.1007/978-3-319-32552-1_56
Print ISBN: 978-3-319-32550-7
Online ISBN: 978-3-319-32552-1