Abstract
This paper reports the integration of a remotely operated vehicle (ROV) solution for monitoring water quality in fish farms. The robotic system includes an RGB camera for real-time video capture and a set of integrated sensors to measure hydro-climatic data. Computer vision algorithms were implemented with the aim of inspecting net-cages in fish farms. A comprehensive software solution was developed to allow seamless use of the vision algorithms proposed in this work. Our system was designed to process underwater imagery captured by the ROV in order to determine net patterns associated with net failure. The system was tested in a dam under real conditions. ROC data were computed to demonstrate the accuracy of the proposed system during underwater fish cage inspection. On average, we obtained an accuracy of 0.91 for net pattern reconstruction tasks and an accuracy of 0.79 for net damage detection under different underwater scenarios.
1 Introduction
In recent years, the demand for fish protein has been rapidly increasing. According to the Food and Agriculture Organization (FAO), global aquaculture production of aquatic animals reached 73.8 million tonnes in 2014, with an estimated first-sale value of US$160.2 billion [3]. In this regard, around 50% of the seafood consumed worldwide is produced in fish farms.Footnote 1
Recently, the use of remotely operated vehicles (ROVs), autonomous underwater vehicles (AUVs) and autonomous surface vehicles (ASVs) has gained traction for both mapping and environmental monitoring tasks [7, 11, 14, 16, 19, 20]. Most of these ROV solutions in fish farming are used for inspection, cleaning [5, 13, 17] and water quality monitoring [8,9,10, 15, 17, 27]. In [2], a comprehensive review of sensor technologies and specialized instrumentation for robot-driven environmental monitoring identified several challenges that must be addressed before these robotic solutions can be widely adopted: (i) reliability, safety, and endurance, (ii) human–robot interfaces, (iii) adaptive mission planning, (iv) real-time dynamic process tracking, and (v) event detection and classification. In this work, we present an approach for the integration of a remotely operated vehicle (ROV) solution that considers the challenges associated with reliability, the human–robot interface, and event detection and classification.
This paper describes the deployment of the commercial OpenROV systemFootnote 2 and the integration of computer vision algorithms for underwater inspection in fish farms. The process of manually collecting water samples in fish farms is time consuming [23], and the manual inspection of net-cages can be challenging due to the turbidity of the water [18]. In this sense, our approach is twofold: (i) the integration of sensors for water quality monitoring and (ii) computer vision algorithms for detecting net-cage damage and obstructions. The OpenROV system is shown in Fig. 1.
The paper is organized as follows: Sect. 2 provides an overview of the OpenROV platform including the sensors and overall electronics used for underwater monitoring. It also presents the computer vision algorithms and methods for real-time detection of damages in net-cages. In Sect. 3, we conduct field experiments for water sampling and net-cage inspection under different scenarios. Conclusions and upcoming work are presented in Sect. 4.
2 Materials and methods
2.1 The OpenROV platform
The OpenROV is an open-source platform designed for underwater exploration. As shown in Fig. 1, the system is equipped with an HD camera and an array of 4 LEDs that enable clear underwater imagery. The robot can be remotely operated via Ethernet through an extendable 100 m tether. The onboard electronics, actuators, and low-level control are driven by a BeagleBone board with a 1 GHz ARM processor. The specifications are summarized in Table 1. In the following, we introduce a set of additional sensors that enable water quality assessment.
2.1.1 Water quality electronics
The electronics are composed of two parts: the EZO™ embedded circuitsFootnote 3 shown in Fig. 2d and the Tentacle Shield [26] shown in Fig. 2e. The EZO™ circuit is a small computer specifically designed for robotics applications that require accurate and precise measurements of pH, ORP (oxidation–reduction potential), DO (dissolved oxygen) and temperature. This circuit connects via the I2C protocol, offering a high level of stability and accuracy. We used this device to read the information from the probes. The Tentacle is an Arduino-driven shield that can host up to 4 EZO™ circuits to measure pH, ORP, DO and temperature. The Tentacle Shield isolates the sensors and eliminates issues such as noise and ground loops that can arise when using the sensor circuits in closed systems such as aquariums, hydroponics or aquaculture. To interface the Tentacle Shield with the OpenROV, a bi-directional logic level converter from 5 to 3.3 V was required.
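The host software reads each probe through these EZO™ circuits. As a hedged illustration (the byte layout below follows the publicly available EZO datasheets; the function name and example values are ours, not from the paper), a raw reading returned over I2C can be decoded as:

```python
# Hedged sketch: decoding a raw I2C response from an EZO circuit.
# Per the Atlas Scientific EZO datasheets, the response is one status
# byte (1 = success, 2 = error, 254 = still processing, 255 = no data)
# followed by a null-terminated ASCII reading. Names are illustrative.

EZO_OK, EZO_ERROR, EZO_BUSY, EZO_NO_DATA = 1, 2, 254, 255

def parse_ezo_response(raw: bytes) -> float:
    """Return the sensor reading encoded in an EZO I2C response."""
    if not raw:
        raise ValueError("empty response")
    status, payload = raw[0], raw[1:]
    if status != EZO_OK:
        raise IOError(f"EZO status {status}")
    # The ASCII reading is terminated by a null byte.
    text = payload.split(b"\x00", 1)[0].decode("ascii")
    return float(text)

# Example: a pH reading of 7.02 arrives as b'\x01' + b'7.02\x00...'
print(parse_ezo_response(b"\x017.02\x00\x00"))  # -> 7.02
```

In the actual system, the raw bytes would come from an I2C read on the Tentacle Shield; the parsing logic above is independent of the transport.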
2.1.2 IMU, compass and depth module
The heart of the navigation system in the OpenROV is an open-source, low-cost sensor module that provides highly accurate depth, compass heading and roll/pitch navigation data. This additional telemetry greatly helps with navigation and enables closed-loop control commands, making the OpenROV much easier to operate autonomously. As mentioned, this module uses the I2C protocol to connect and communicate with the OpenROV. The module includes:
-
Pressure sensor. The MS5837 [24] is a high-resolution pressure sensor (0–30 bar) with an I2C bus interface for depth measurement, offering a water depth resolution of 2 mm (0.2 mbar) and a maximum operational pressure of 30 bar (435 psi, 200 m water depth).
-
IMU/Compass: BNO055.Footnote 4 This is a System in Package (SiP) integrating a triaxial 14-bit accelerometer, a triaxial 16-bit gyroscope with a range of ±2000 degrees per second, a triaxial geomagnetic sensor and a 32-bit Cortex-M0+ microcontroller running Bosch Sensortec sensor fusion software, all in a single package.
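As a rough sketch of how the MS5837 pressure reading translates into the depth telemetry mentioned above (the hydrostatic relation is standard; the freshwater density and atmospheric-pressure constants are our assumptions, not values from the paper):

```python
# Hedged sketch: converting an MS5837 pressure reading (mbar) to depth.
# Uses the standard hydrostatic relation depth = (P - P_atm) / (rho * g).
# The constants below are illustrative assumptions.

RHO_FRESHWATER = 997.0   # kg/m^3, approximate freshwater density
G = 9.80665              # m/s^2, standard gravity
P_ATM_MBAR = 1013.25     # sea-level atmospheric pressure, mbar

def depth_from_pressure(p_mbar: float) -> float:
    """Depth in metres from an absolute pressure reading in mbar."""
    p_pa = (p_mbar - P_ATM_MBAR) * 100.0  # gauge pressure in Pa
    return p_pa / (RHO_FRESHWATER * G)

# 0.2 mbar (the sensor's stated resolution) corresponds to ~2 mm:
print(round(depth_from_pressure(P_ATM_MBAR + 0.2) * 1000, 1))  # -> 2.0
```

This also makes the datasheet's "0.2 mbar ≈ 2 mm" equivalence easy to verify.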
2.1.3 Communications
The entire system is controlled from an external computer via an Ethernet connection using the TCP/IP protocol, which facilitates communication through a web browser. Thus, all the information from the robot’s sensors, including real-time video, can be transmitted to the laptop. The brain of the OpenROV is Node.js, a JavaScript runtime that uses an event-driven, non-blocking I/O model to handle the connection between the user and the ROV (a bidirectional channel for sensor data and control inputs). The underwater imagery and sensor data are sent via Socket.io [22], which enables real-time bidirectional event-based communication. Both Node.js and Socket.io run on the BeagleBone Black embedded system and communicate with the Arduino through a serial port, as shown in Fig. 3.
2.2 Water quality
The proper growth of aquatic organisms is highly dependent on water quality. Multiple factors can alter the physical and chemical properties of the water. A drastic change in temperature or in the concentration of dissolved oxygen can result in massive damage to flora and fauna. Even less drastic changes might affect the ability of organisms to resist the pathogens that are always present in the crop water. Chronic exposure to suboptimal conditions results in a slow growth rate and a higher mortality rate. For good production, it is necessary to keep the environmental conditions of the water within the tolerance limits of the species being cultivated. In this regard, maximum production is achieved when all factors influencing the development of the organisms approach their optimum point. In this paper, water quality monitoring is performed by measuring four variables: potential of hydrogen (pH), oxidation reduction potential (ORP), dissolved oxygen (DO) and temperature (T). The probes used to measure each variable are depicted in Fig. 2.
2.2.1 Potential of hydrogen (pH)
The pH is given by the hydrogen ion concentration and indicates whether the water is acidic or basic, expressed on a scale ranging from 0 to 14; a pH of 7 is neutral. Changes in pH are mostly related to the concentration of carbon dioxide, which acidifies the water. Aquatic plants require carbon dioxide for photosynthesis, so this process partly drives the daily pH fluctuation: pH rises during the day and decreases at night. In addition, pH stability is provided by the so-called alkaline reserve or buffer system, corresponding to the concentration of carbonate and bicarbonate. For the application at hand, pH values below 4 or above 11 can harm the fish population. Furthermore, drastic fluctuations even within the 4–11 range can cause severe damage to fish. Acidic waters irritate the gills of fish, which tend to become covered with mucus and, in some cases, suffer histological destruction of the epithelium. In our system, we integrated a pH probe carefully calibratedFootnote 5 at 25 \(^{\circ }\)C using three reference points from specific calibration solutions: low-point (pH 4), mid-point (pH 7) and high-point (pH 10). The probe can remain fully submerged in fresh or salt water indefinitely. Figure 2a shows the pH probe, while Table 2 details the technical specifications.
2.2.2 Oxidation reduction potential (ORP)
The oxidation–reduction potential (ORP) reflects the oxidizing capacity of the water: higher values indicate more oxygen and, in general, a healthier fish farm. Likewise, a high ORP favors the elimination of contaminants and dead plant matter. The ORP therefore provides additional information about water quality and pollution. We integrated an ORP probeFootnote 6 carefully calibrated at 25 \(^{\circ }\)C using a single calibration point from a specific calibration solution set at 225 mV. Figure 2b shows the ORP probe, while Table 3 details the technical specifications.
2.2.3 Dissolved oxygen (DO)
Fish breathe molecular oxygen (O2) dissolved in water. The oxygen concentration can be considered the most important variable in aquaculture; in many ways, the oxygen level is the best indicator of the general state of the fish farm. It is important to know the amount of oxygen dissolved in water and to understand the many interacting factors that determine and influence this concentration. As the water temperature rises, the water loses its ability to hold gas in solution. Therefore, problems with insufficient oxygen concentrations occur more often during the hottest time of year, when water temperatures are highest. Likewise, the solubility of oxygen in water decreases as atmospheric pressure decreases, i.e., at higher altitudes (above sea level) water holds smaller amounts of gas in solution. We integrated a DO probeFootnote 7 calibrated as a function of temperature, salinity and pressure. Using external probes, the calibration values were determined as: temperature (20 \(^{\circ }\)C), salinity (50,000 \(\upmu\)S or 36.7 ppt) and pressure (202.65 kPa). Figure 2b shows the DO probe, while Table 4 details the technical specifications.
2.2.4 Temperature
Fish are ectothermic, or poikilothermic, organisms (cold-blooded): they cannot maintain a high, constant body temperature. Thus, their body temperature reflects the temperature of the water in which they are immersed. The body temperature of a fish largely determines its metabolic and growth rates. Fish are also adapted to environments that undergo only gradual temperature changes. Tropical (warm-water) fish develop best in water with a temperature between 25 and 32 \(^{\circ }\)C. In tropical or subtropical climates, the water temperature stays within this range for most of the year. Below 23 \(^{\circ }\)C, development is slow or delayed due to a decrease in metabolic rate. When the water temperature exceeds 32 \(^{\circ }\)C, fish metabolism becomes very fast; although growth can be rapid, hot water holds oxygen poorly. In general, fish cannot withstand sudden changes in water temperature. We integrated a platinum RTD (resistance temperature detector) probeFootnote 8 with a sensing range of −200 to 850 \(^{\circ }\)C. Figure 2c shows the temperature probe.
2.3 Computer vision algorithms
As shown in Fig. 1, the OpenROV is capable of underwater photo capture and video recording thanks to its onboard HD camera. This section introduces a set of computer vision algorithms for inspecting net-cages and detecting damage (fissures/holes). Figure 4 details the proposed software architecture for net-cage inspection. The software package was implemented in Python and comprises a set of computer vision algorithms divided into three modules: (i) a comprehensive graphical user interface (GUI), (ii) algorithms for underwater image pre-processing, and (iii) image analysis for net-cage damage and obstruction detection. Figure 4 shows how these modules are interconnected, with insets showing real underwater imagery. In the following, we introduce the methods that support our approach to underwater net-cage monitoring.
2.3.1 Graphical user interface—GUI
The GUI allows the user to monitor and configure all aspects of the ROV’s underwater inspection, such as mission visualization, image processing options, and general configuration parameters. Figure 5 shows the main interface of the developed software. The upper-left window (red circle labeled 1) shows the ROV’s real-time video stream. The window labeled 2 displays the histogram of the current view, with RGB, HSV and Hue options, among others. The window labeled 3 displays the output of the computer vision algorithms running seamlessly in the background. The data in Fig. 5 are displayed as follows:
-
Window labeled 1: when the system detects a possible anomaly in a certain region of the net-cage, the area is enclosed in a red square.
-
Window labeled 3: the corresponding image of the enclosed region is displayed to the user. The software allows the user to perform two processing actions over the captured image: (i) net-cage damage detection, or (ii) fish obstruction detection.
-
Windows labeled 4, 5, 6 show the processing results for net-cage damage detection.
-
Windows labeled 7, 8, 9 show the processing results for fish obstruction detection.
The aforementioned functionalities are mainly supported by the computer vision algorithms detailed in the following.
2.3.2 Image filtering
Our software provides four noise-reduction filtering methods: Gaussian, homogeneous, mean and bilateral filters. The Gaussian filter is applied by default. This filter is considered an optimal method for ensuring a smooth response due to its bell-shaped Gaussian kernel. Moreover, most of the noise typically encountered in underwater images follows a Gaussian distribution, i.e., this kernel is well suited for effective noise reduction.
As described by Eq. (1), the Gaussian filter works by using a 2D point-spread distribution function; this is achieved by convolving the 2D Gaussian distribution function with the image:

\(G(x,y)=\frac{1}{2\pi \sigma ^{2}}\,e^{-\frac{(x-\mu _x)^{2}+(y-\mu _y)^{2}}{2\sigma ^{2}}}\)  (1)

In Eq. (1), the term \(\sigma ^2\) is the variance, and \(\mu\) corresponds to the position of the center of the (bell-shaped) peak. Figure 8 details experimental results for histogram-based normalization color correction and Gaussian filtering applied to noise reduction. As shown, a kernel size of 10 provides smooth results (plot c).
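For illustration, the bell-shaped kernel of Eq. (1) can be built explicitly. This NumPy sketch mirrors what OpenCV's cv2.GaussianBlur computes internally; the kernel size and \(\sigma\) below are illustrative choices, not the paper's parameters:

```python
# Hedged sketch of the Gaussian smoothing kernel. Building it
# explicitly makes the bell-shaped weights of Eq. (1) visible;
# convolving this kernel with the image performs the filtering.
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Normalized 2D Gaussian kernel centred on the window (mu = centre)."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()  # weights sum to 1, preserving mean brightness

k = gaussian_kernel(5, sigma=1.0)
print(k.shape, round(float(k.sum()), 6))  # (5, 5) 1.0
```

Normalizing the kernel to unit sum is what keeps the filtered image at the same overall brightness as the input.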
2.3.3 Otsu thresholding
The Otsu thresholding technique is a well-known adaptive method for obtaining a binary image [6, 21, 25]. The algorithm calculates the optimum threshold separating two clusters within the image: foreground and background pixels. The Otsu method can be one-dimensional or two-dimensional, the latter being an improvement over the original method, especially for images highly corrupted by noise. In our application, we tested both approaches. Equations (2), (3) and (4) summarize the Otsu method:

\(\sigma _w^{2}(t) = q_1(t)\,\sigma _1^{2}(t) + q_2(t)\,\sigma _2^{2}(t)\)  (2)

where

\(q_1(t) = \sum _{i=1}^{t} P(i), \qquad q_2(t) = \sum _{i=t+1}^{I} P(i)\)  (3)

\(\mu _1(t) = \sum _{i=1}^{t} \frac{i\,P(i)}{q_1(t)}, \qquad \mu _2(t) = \sum _{i=t+1}^{I} \frac{i\,P(i)}{q_2(t)}\)  (4)

\(\sigma _1^{2}(t) = \sum _{i=1}^{t} \left[ i-\mu _1(t)\right] ^{2}\frac{P(i)}{q_1(t)}, \qquad \sigma _2^{2}(t) = \sum _{i=t+1}^{I} \left[ i-\mu _2(t)\right] ^{2}\frac{P(i)}{q_2(t)}\)  (5)

The Otsu method searches for the threshold (t) that minimizes the within-class variance \(\sigma _w^{2}(t)\) in Eq. (2), defined as a weighted sum of the variances of the two classes: foreground and background pixels. The weights \(q_1\) (background) and \(q_2\) (foreground) represent the probabilities P(i) of the two classes according to the threshold (t), and \(\sigma _1^{2}\) and \(\sigma _2^{2}\) are the variances of each class, as detailed in Eq. (5), computed over the pixels (i) belonging to the foreground or background class. The selected threshold (t) is the value for which the weighted sum of the class variances is minimum. In summary, the Otsu algorithm finds the weight (q), mean (\(\mu\)) and variance (\(\sigma ^{2}\)) for both the background and the foreground, then computes the within-class variance \(\sigma _w^{2}(t)\) and selects the threshold (t) with the lowest sum of weighted variances.Footnote 9 Figure 6 shows the results. The two-dimensional Otsu algorithm (plot d) achieved accurate results in terms of noise reduction and image binarization sharpness with a threshold value of 50.
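A minimal one-dimensional version of this search can be sketched as follows. This is the textbook formulation, not the exact implementation used in our software; the 2D variant and OpenCV's cv2.threshold with THRESH_OTSU follow the same idea:

```python
# Minimal 1D Otsu threshold in NumPy, minimizing the within-class
# variance sigma_w^2(t) = q1*sigma1^2 + q2*sigma2^2 described above.
import numpy as np

def otsu_threshold(gray: np.ndarray, levels: int = 256) -> int:
    hist = np.bincount(gray.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                       # P(i)
    i = np.arange(levels)
    best_t, best_var = 0, np.inf
    for t in range(1, levels):
        q1, q2 = p[:t].sum(), p[t:].sum()       # class weights
        if q1 == 0 or q2 == 0:
            continue                            # skip degenerate splits
        mu1 = (i[:t] * p[:t]).sum() / q1        # class means
        mu2 = (i[t:] * p[t:]).sum() / q2
        var1 = (((i[:t] - mu1) ** 2) * p[:t]).sum() / q1
        var2 = (((i[t:] - mu2) ** 2) * p[t:]).sum() / q2
        wcv = q1 * var1 + q2 * var2             # within-class variance
        if wcv < best_var:
            best_t, best_var = t, wcv
    return best_t

# Synthetic bimodal image: dark background (~30) and bright net (~200);
# the returned threshold falls between the two modes.
img = np.concatenate([np.full(500, 30), np.full(500, 200)]).astype(np.uint8)
print(otsu_threshold(img.reshape(25, 40)))
```
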
2.3.4 Hough transform
Line detection and image perspective correction are achieved by applying the well-known Hough Transform (HT) method [1]. The HT takes as input the thresholded image produced by the Otsu method. The approach relies on gradient (G) information to detect lines and circles within the image, and then applies a rotation matrix that adjusts the perspective. The gradient measures how the pixel intensity changes with the image coordinates (x, y).
The gradient vector is defined as:

\(\nabla f(x,y) = (G_x, G_y) = \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right)\)  (6)

The length of the vector \(\nabla f(x,y)\) in Eq. (7) indicates the magnitude of the gradient, whereas \(\theta\) refers to its direction:

\(\Vert \nabla f(x,y)\Vert = \sqrt{G_x^{2}+G_y^{2}}, \qquad \theta = \arctan \left( \frac{G_y}{G_x}\right)\)  (7)

Given Eq. (7), the direction of an edge in the image is perpendicular to the gradient at any given point. The approach then represents lines within the image in polar coordinates, as defined in Eq. (8):

\(r = x\,\cos \theta + y\,\sin \theta\)  (8)
In Fig. 7a, the intersection point \((r', \theta ')\) corresponds to the line that passes through the two points \((x_i,y_i)\) and \((x_j,y_j)\). Therefore, given a gradient magnitude image G(x, y) containing a line, for all \(G(x_i,y_i)\) we calculate \(r= x_i \cos \theta +y_i \sin \theta\) and accumulate the corresponding indexes of r according to Fig. 7a. Once the line borders of the net have been detected, the centroids of the image are found by calculating the central moments. Since these features are invariant to geometric transformations, the centroids allow determining the final transform for perspective correction. The central moments \(M_{i,j}\) are computed following Eq. (9):

\(M_{i,j} = \sum _{x}\sum _{y} (x-\overline{x})^{i}\,(y-\overline{y})^{j}\,I(x,y)\)  (9)

where the centroids \(\{\overline{x},\overline{y}\}\) are calculated from the moments of order 0 and 1, and I(x, y) corresponds to the pixel intensities [4]. Figure 7b details how this process is applied to the acquired underwater imagery, whereas Fig. 8 details experimental results.
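For illustration, the moment computation behind the centroid extraction can be sketched in NumPy (cv2.moments provides the same quantities; the function names and the toy image below are ours, not from the paper's code):

```python
# Hedged sketch: raw image moments and the centroid used for
# perspective correction. M_ij = sum_x sum_y x^i y^j I(x,y); the
# centroid is (M10/M00, M01/M00).
import numpy as np

def raw_moment(img: np.ndarray, i: int, j: int) -> float:
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    return float(((x ** i) * (y ** j) * img).sum())

def centroid(img: np.ndarray):
    m00 = raw_moment(img, 0, 0)
    return raw_moment(img, 1, 0) / m00, raw_moment(img, 0, 1) / m00

# A single bright knot at pixel (x=3, y=2) has its centroid there:
img = np.zeros((5, 6))
img[2, 3] = 1.0
print(centroid(img))  # -> (3.0, 2.0)
```

The central moments of Eq. (9) are obtained the same way after shifting coordinates by this centroid, which is what makes them invariant to translation.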
2.3.5 Convex hull
The convex hull algorithm was used to reconstruct the net patterns by connecting the knot-points, as shown in Fig. 9. This method is widely used for connecting vertices in geometric shape reconstruction [28, 29]. The algorithm can be divided into three stages: seed-point identification, region splitting, and vertex connection. In our application, the seed points are the knot-points previously calculated with the HT and the central moments. For the region-splitting stage, the centroids are used to define upper and lower regions of the vertex distribution, separated by a dividing line. The convex hull method then connects the vertices along the x-axis, following a clockwise direction for the upper region and a counterclockwise direction for the lower one. When the two regions are complete, they are concatenated by connecting the first and last vertices of each region.
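The upper/lower splitting and concatenation described above match the standard monotone-chain convex hull construction, which can be sketched as follows (the knot-point coordinates below are illustrative, not data from the experiments):

```python
# Sketch of the hull construction described above: build a lower and
# an upper chain over the x-sorted points, then concatenate them.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); <= 0 means a non-left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def chain(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out

    lower, upper = chain(pts), chain(reversed(pts))
    # Drop each chain's last point (it starts the other chain), then join.
    return lower[:-1] + upper[:-1]

knots = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]  # (1, 1) is interior
print(convex_hull(knots))  # the hull omits the interior point
```

In the actual pipeline, the input list would be the knot-points detected by the HT/moment stage rather than hand-picked coordinates.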
3 Results
Experiments were conducted at the Betania dam located in the Department of Huila in Colombia.Footnote 10 Figure 10 details the fish cages. The dam lies 30 km south of Neiva, a small city in Colombia, and serves several purposes: electric power generation, land irrigation, and fish farming. It has an area of 7400 ha and a maximum depth of 76 m. Its total volume is 1971 million m\(^3\), with a capacity to generate 540 MW of power. The experiments reported here were performed at an average ambient temperature of around 30 \(^{\circ }\)C. The OpenROV was remotely operated to perform missions around the cages at six different depths. Using the depth control, the robot achieved steady immersions at all depth set-points. Our system collected several types of data: dissolved oxygen, pH, ORP, temperature, and trajectory logs. Overall, 360 samples were captured.
3.1 Water quality monitoring and analysis
As previously mentioned, the ROV captured physicochemical variables at each depth. These experiments were conducted in a fish farm that produces tilapia, a freshwater fish that grows in the region. Proper growth of the fish is achieved under the following conditions: water temperature 25.0–32.0 \(^{\circ }\)C, dissolved oxygen 5.0–9.0 mg/l, pH 6.0–9.0 and ORP 400–600 mV. In terms of pH, tilapia grows best in waters with a neutral or slightly alkaline pH, i.e., pH values between 6.5 and 9. As observed in Fig. 11a, the readings indicated a proper environment for tilapia farming near the surface; however, we found that the pH increased with depth, since there is a large difference between the water temperature near the surface (ambient temperatures around 30 \(^{\circ }\)C) and the lower temperatures at deeper levels [12]. This finding allowed farmers to redesign the cages to limit their depth during the summer/dry season.
Another important factor for tilapia farming is the concentration and availability of dissolved oxygen (DO). Normally, nutrient-rich waters exhibit abundant oxygen levels in the middle of the afternoon. One factor that causes considerable variation in oxygen levels is light, particularly on cloudy days. Sunlight and plankton, through photosynthesis, are responsible for much of the oxygen produced. Therefore, when low-light conditions restrict photosynthesis, critical oxygen levels can occur. Figure 11b shows the dissolved oxygen at different depths. As observed, as the temperature decreased, the amount of dissolved oxygen in the water increased [12]. The mean DO at each depth was: 7.6, 7.6, 7.69, 7.85, 7.95 and 8.24 mg/l.
In terms of water temperature, the optimal range for tilapia farming is 28–32 \(^{\circ }\)C. When the temperature decreases to 15 \(^{\circ }\)C, several biological factors are compromised, such as fish growth. Conversely, temperatures above 30 \(^{\circ }\)C require more available oxygen in the water. Figure 11c shows the temperature at different depths: as the water depth increased, the temperature decreased [12]. The mean temperatures at each depth were: 30.04, 30.07, 30.06, 28.05, 27.06 and 26.05 \(^{\circ }\)C. These values were within the limits established for tilapia farming. Finally, we also observed consistent ORP values. In general, values higher than 400 mV are considered suitable for tilapia farming. Figure 11d shows the ORP at different depths. The mean ORP at each depth was: 474, 474, 475, 475, 476 and 476 mV. Overall, all pH, DO and ORP values were within the limits established for tilapia farming.
3.2 Fish net-cage inspection
This section presents the results obtained using the onboard camera for the inspection of faults and defects in fish net-cages. Lighting conditions were optimal, allowing good visibility without using the LED lighting system integrated in the ROV. The inspection was carried out both inside and outside the cages. During this inspection, we found several places where the net had small and large holes, as depicted in Fig. 12. Larger holes can be detected by specialized divers, but small holes are decidedly difficult to detect. We conducted several experiments in which net imagery was captured under different scenarios. The computer vision algorithms applied the adaptive segmentation process using the Otsu thresholding method. Subsequently, the Hough transform was used as a feature extraction technique to determine the damage by analyzing the shapes and connections that represent the net cage. Those shapes and connections defined the pattern of the net via the convex hull method. Figure 13 shows a comprehensive description of the computational steps followed by our system.
In order to measure the accuracy of the system in terms of (i) net pattern reconstruction and (ii) damage detection, ROC data were calculated for the underwater imagery captured under different scenarios:
-
Net pattern reconstruction (cf. Fig. 14a):
-
Scenario A: net imagery perfectly aligned.
-
Scenario B: net imagery with left-oriented horizontal perspective.
-
Scenario C: net imagery with right-oriented horizontal perspective.
-
-
Damage detection (cf. Fig. 14b):
-
Scenario A: net imagery with one damage.
-
Scenario B: net imagery with multiple damages.
-
Scenario C: net imagery with horizontal perspective.
-
Figure 14 shows the overall results. Over 210 images were analyzed to compute the ROC data, about 70 per scenario.Footnote 11 Furthermore, a ROC space was constructed for each scenario using three metrics: true positive rate (TPR, or recall), false positive rate (FPR) and accuracy (ACC). The ROC curves plot the FPR (1 − specificity) versus the TPR (sensitivity) for the three evaluated scenarios. The accuracy is calculated as:

\({\mathrm{ACC}} = \frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}}\)
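For reference, the three metrics can be computed directly from the confusion-matrix counts of a detector; the counts used below are illustrative, not the paper's experimental data:

```python
# ROC-space metrics from confusion-matrix counts:
# TPR = TP/(TP+FN), FPR = FP/(FP+TN), ACC = (TP+TN)/(TP+TN+FP+FN).
def roc_metrics(tp: int, fp: int, tn: int, fn: int):
    tpr = tp / (tp + fn)                 # recall / sensitivity
    fpr = fp / (fp + tn)                 # 1 - specificity
    acc = (tp + tn) / (tp + tn + fp + fn)
    return tpr, fpr, acc

# Illustrative counts for one scenario of ~70 images:
tpr, fpr, acc = roc_metrics(tp=45, fp=5, tn=18, fn=2)
print(round(tpr, 2), round(fpr, 2), round(acc, 2))  # 0.96 0.22 0.9
```
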
As observed in Fig. 14a, our methods were able to reconstruct the net patterns with an accuracy \({\mathrm{ACC}}>0.9\) for the imagery classified in scenarios A and B. For the images in scenario C, the accuracy achieved was \({\mathrm{ACC}}=0.8\). The decrease in performance has two causes. The first is camera calibration: most of the images requiring right-oriented horizontal perspective corrections presented higher distortions at shorter focal lengths. The second is the Hough Transform method itself, which can give misleading results when net patterns appear to be aligned by chance, particularly at the image boundaries. As observed in Fig. 14a, b (images from scenario C), the pattern reconstruction failed around the image boundaries because the algorithm could not detect the vertices forming those net patterns. A possible solution would be to apply an enhanced Hough transform method that properly handles perspective and distortion, or machine learning algorithms able to reconstruct the missing vertices around the image boundaries. Considering all the tested scenarios, we achieved an average net reconstruction accuracy of 0.91. These are promising results towards the autonomous inspection of net cages for fish farming.
In terms of damage detection, Fig. 14b shows the results. Our methods tend to be more accurate when there are multiple damages in the net (scenario B), which decreases the FPR. Again, perspective and distortion hinder damage detection for the images in scenario C, with an overall accuracy \({\mathrm{ACC}}=0.6\). Considering all the tested scenarios, we achieved an average damage detection accuracy of 0.79. The entire imagery database used in this work can be accessed via the following link.Footnote 12
4 Conclusions
The proposed system allowed for faster inspection of the fish cages than the traditional method used at the Betania dam: the traditional method requires 2 h of diving services to inspect the net-cage, whereas our system covered the entire area in 10 min. By integrating water quality sensors, we obtained reliable measurements that allow farmers to easily assess the quality of the water. Additionally, the computer vision algorithms developed for damage inspection examined net damage with an average accuracy of 0.79. Traditional computer vision algorithms were integrated using the OpenCV library, and a custom-designed software package (cf. Fig. 5) was implemented to facilitate the use of the proposed methods. The Hough Transform and the two-dimensional Otsu method were used to identify the entire net pattern based on border and eye-point detection, and then to compute the knot-point connections that enabled the identification of damage. Other computer vision algorithms, such as morphological operations (erosion and dilation), opening and closing, and image contour analysis, were also used throughout the image post-processing stage. Upcoming work will include a low-cost side-scan sonar to improve underwater visibility. Furthermore, the side-scan sonar will allow us to create high-definition maps of the subsurface to enable precise underwater mapping via SLAM.
Notes
A comprehensive example about how the Otsu method can be applied can be found at: http://www.labbookpages.co.uk/software/imgProc/otsuThreshold.html.
Imagery can be downloaded at: https://www.dropbox.com/sh/4b3hxj7vu3gl8ry/AAD4CCI4z410EvstMSwGPau4a?dl=0.
References
Cantoni V, Mattia E (2013) Hough transform. Springer, New York, pp 917–918. https://doi.org/10.1007/978-1-4419-9863-7_1310
Dunbabin M, Marques L (2012) Robotics for environmental monitoring. IEEE Robot Autom Mag 19(1):20–23 (from the guest editors)
Food and Agriculture Organization of the United Nations -FAO (2016) The State of World Fisheries and Aquaculture 2016. Contributing to food security and nutrition for all. Rome, 200 pp. ISBN 978-92-5-109185-2
Flusser J, Suk T (2006) Rotation moment invariants for recognition of symmetric objects. IEEE Trans Image Process 15(12):3784–3790
Frost AR, McMaster AP, Saunders KG, Lee SR (1996) The development of a remotely operated vehicle (ROV) for aquaculture. Aquac Eng 15(6):461–483
Goh TY, Basah SN, Yazid H, Aziz Safar MJ, Ahmad Saad FS (2018) Performance analysis of image thresholding: Otsu technique. Measurement 114:298–307. https://doi.org/10.1016/j.measurement.2017.09.052
Hernández JD, Vidal E, Moll M, Palomeras N, Carreras M, Kavraki LE (2019) Online motion planning for unexplored underwater environments using autonomous underwater vehicles. J Field Robot 36(2):370–396. https://doi.org/10.1002/rob.21827
Hidalgo F, Mendoza J, Cuéllar, F (2015) ROV-based acquisition system for water quality measuring. In: OCEANS 2015-MTS/IEEE Washington, IEEE, pp 1–5
Johnson‐Roberson M, Bryson M, Friedman A, Pizarro O, Troni G, Ozog P, Henderson JC (2017) High‐resolution underwater robotic vision‐based mapping and three‐dimensional reconstruction for archaeology. J Field Robotics 34:625–643. https://doi.org/10.1002/rob.21658
Karimanzira D, Jacobi M, Pfuetzenreuter T, Rauschenbach T, Eichhorn M, Taubert R, Ament C (2014) First testing of an AUV mission planning and guidance system for water quality monitoring and fish behavior observation in net cage fish farming. Inf Process Agric 1(2):131–140
Kirkwood WJ (2007) Development of the DORADO mapping vehicle for multibeam, subbottom, and sidescan science missions. J Field Robot 24(6):487–495
Lind OT (2016) Textbook of limnology. Limnol Oceanogr Bull 25(4):137–138
Majeed FA, Alhmoudi T, Alghawi M, Alyammahi F, Alhosani H, Alloghani N (2018) Investigation report on underwater damage detection using laser proximity sensors. In: 2018 Advances in science and engineering technology international conferences (ASET), pp 1–3. https://doi.org/10.1109/ICASET.2018.8376848
Manley JE, Halpin S, Radford N, Ondler M (2018) Aquanaut: A new tool for subsea inspection and intervention. In: OCEANS 2018 MTS/IEEE Charleston, pp 1–4. https://doi.org/10.1109/OCEANS.2018.8604508
Pagliai M, Ridolfi A, Gelli J, Meschini A, Allotta B (2018) Design of a reconfigurable autonomous underwater vehicle for offshore platform monitoring and intervention. In: 2018 IEEE/OES autonomous underwater vehicle workshop (AUV), pp 1–6. https://doi.org/10.1109/AUV.2018.8729776
Pedersen L, Smith T, Lee SY, Cabrol N (2014) Planetary LakeLander—a robotic sentinel to monitor remote lakes. J Field Robot 32(6):860–879
Rundtop P, Frank K (2016) Experimental evaluation of hydroacoustic instruments for ROV navigation along aquaculture net pens. Aquac Eng 74:143–156
Price C, Black KD, Hargrave BT, Morris JA Jr (2015) Marine cage culture and the environment: effects on water quality and primary production. Aquac Environ Interact 6(2):151–174
Rigaud V (2007) Innovation and operation with robotized underwater systems. J Field Robot 24(6):449–459
Sakagami N, Yumoto Y, Takebayashi T, Kawamura S (2019) Development of dam inspection robot with negative pressure effect plate. J Field Robot 36(8):1422–1435. https://doi.org/10.1002/rob.21911
Sha C, Hou J, Cui H (2016) A robust 2D Otsu's thresholding method in image segmentation. J Vis Commun Image Represent 41:339–351. https://doi.org/10.1016/j.jvcir.2016.10.013
Socket.IO (2016). http://socket.io. Accessed 2 Sep 2016
Tavares LHS, Santeiro RM (2013) Fish farm and water quality management. Acta Sci Biol Sci 35(1):21–27
TE Connectivity (2017) MS5837-30BA pressure sensor. http://www.te.com/usa-en/product-CAT-BLPS0017.html
Wang F, Li J, Liu S, Zhao X, Zhang D, Tian Y (2014) An improved adaptive genetic algorithm for image segmentation and vision alignment used in microelectronic bonding. IEEE/ASME Trans Mechatron 19(3):916–923
Whitebox Labs (2016). https://www.whiteboxes.ch. Accessed 2 Sep 2016
Wynn RB, Huvenne VAI, Le Bas TP, Murton BJ, Connelly DP, Bett BJ, Ruhl HA, Morris KJ, Peakall J, Parsons DR, Sumner EJ, Darby SE, Dorrell RM, Hunt JE (2014) Autonomous underwater vehicles (AUVs): their past, present and future contributions to the advancement of marine geoscience. Mar Geol 352:451–468
Yahya Z, Rahmat RWOK, Khalid F, Rizaan A, Rizal AS (2017) A concave hull based algorithm for object shape reconstruction. Int J Inf Technol Comput Sci 9:1–9
Ye QZ (1995) A fast algorithm for convex hull extraction in 2D images. Pattern Recognit Lett 16(5):531–537. https://doi.org/10.1016/0167-8655(95)00122-W
Acknowledgements
This work was funded by the Pontificia Universidad Javeriana (PUJ) in Bogota, Colombia, within the project framework "Fish farm monitoring using a remotely operated underwater robot" (Grant ID: 6895). This project was also supported by Grant ID: 8305 (Navigation control of an underwater robot for monitoring and inspection). The authors thank the Betania dam authorities for granting permission to Dr. William Coral to conduct the experiments presented herein. The computer vision algorithms presented in this paper, including the comprehensive experimental results, are also detailed in the thesis document by J. Betancourt: https://repository.javeriana.edu.co/handle/10554/21415.
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cite this article
Betancourt, J., Coral, W. & Colorado, J. An integrated ROV solution for underwater net-cage inspection in fish farms using computer vision. SN Appl. Sci. 2, 1946 (2020). https://doi.org/10.1007/s42452-020-03623-z