Fully Automatic Visual Servoing Control for Underwater Vehicle Manipulator Systems Based on a Heuristic Inverse Kinematics

The use of underwater vehicle manipulator systems (UVMS) equipped with cameras has gained significant attention due to their capacity to perform underwater tasks autonomously. However, controlling both the manipulator and the remotely operated vehicle (ROV) based on the vision system information is not an easy task, especially in situations where the vehicle cannot be parked/held stationary. Most of the existing approaches rely on complex matrix calculations for the inverse kinematics (IK), which can lead to high computational costs and the need to deal with singularity problems. A problem arises when the time needed to calculate the UVMS configuration reduces the frequency of target pose estimation, to the point where the target moves out of the camera field of view. Therefore, this paper proposes an autonomous visual servoing approach for UVMS, including an extension of a heuristic technique named M-FABRIK (Mobile - Forward and Backward Reaching IK) to calculate the UVMS inverse kinematics in a simple and fast way. This approach aims to control both the configuration of the manipulator and the ROV position in order to allow underwater intervention in situations where the ROV cannot be parked/held stationary. This solution allows the vehicle to be positioned according to additional criteria, besides avoiding matrix inversion and being robust to singularities. Trials have been performed with a manipulator mounted on a work-class ROV for an autonomous underwater monitoring task, and the results demonstrate a simple and fast approach, able to set the configuration of the manipulator as well as the ROV for real-time visual servoing applications, such as underwater monitoring, tracking and intervention tasks.

parking on the seabed or a secure structure onto which the robot can be clamped [11]. Hence, the manipulation task must be performed while the vehicle is hovering. In these cases, highly skilled pilots are required, as, besides controlling manipulators, the vehicle has to be operated in order to keep it as stable as possible relative to the target, by compensating for external motion disturbances (sea current, waves, tides) and even its own motion induced by the manipulator's reaction forces/moments. Furthermore, the pilot may have to manage a series of secondary objectives, which can include: avoiding contact with the environment, adjusting the vehicle camera view through ROV motions, and moving the ROV to ensure that the joints of the manipulator do not reach the limits of travel [12].
According to Cooke in [13], due to the complexity of the tasks, pilots eventually get fatigued, which leads to a significant reduction of task effectiveness. Therefore, recent research efforts are aimed at developing completely autonomous or semi-autonomous approaches to control both the arm and the ROV in order to perform the tasks. These approaches are normally combined with visual feedback so that the robot can perform the task more precisely with the manipulator's end-effector.
In order to set the configuration of the UVMS, some authors present solutions to this problem through control approaches, such as in [14,15] and [16]. Others present solutions based on inverse kinematics, since underwater manipulators normally have embedded low-level controllers and UVMS are always kinematically redundant, due to the mobility provided by the vehicle itself in addition to the degrees of freedom provided by the manipulator arm [17].
Many of the approaches developed for UVMS deal with the inverse kinematics based on the pseudo-inverse of the Jacobian. Nevertheless, if the inverse kinematics is part of a visual servoing approach, the problem becomes even more challenging, because the time needed to converge to a solution directly affects the frequency of target pose estimation within the visual servo loop. So, if the algorithm takes too long to calculate the IK, the target object may have moved out of the camera field of view before the next image is taken for the pose estimation, opening the visual feedback control loop because of the lack of visual measurements and eventually leading to servoing failure [7].
Such a situation was observed in [7], where the authors mentioned problems with the amount of time taken to calculate the IK using the second-order pseudo-inverse of the Jacobian approach. In general, approaches based on the pseudo-inverse of the Jacobian tend to incur high computational cost due to the complex matrix calculations and the need to deal with singularity problems.
Some approaches, such as the one presented in [18], overcome the convergence-time problem and are suitable for real-time applications, using the pseudo-inverse of the Jacobian and a solution based on the activation and deactivation of tasks. However, the approach still deals with matrix calculations, in which the size of the matrices changes according to the number of degrees of freedom and the number of tasks assigned to the robot. Therefore, we propose a completely autonomous visual servoing approach for UVMS which is able to set the configuration for the manipulator and the ROV position in a simple and fast way, avoiding matrix inversion and being robust to singularities. The approach is based on an extension of an IK technique that we developed recently, named M-FABRIK (Mobile - Forward and Backward Reaching IK) [19], a mobile version of the iterative solver for the IK problem presented in [20]. The proposed approach allows underwater intervention in real-time, even in situations where the ROV cannot be parked or clamped onto the target structure. Furthermore, the solution allows the vehicle to be positioned according to additional criteria, and the algorithm is extended to deal with practical problems due to uncertainties or disturbances.
The major contributions of this approach are the following:
• Extension of M-FABRIK: M-FABRIK is extended to deal with the possible movements of the UVMS in the three-dimensional (3D) workspace and to receive constant feedback from the vision system;
• Fast approach: inheriting one of the main characteristics of FABRIK, the approach converges quickly to a solution, which allows real-time applications;
• Simple visual servoing approach for a highly redundant system: from target estimation to setting both the manipulator configuration and the ROV position, the whole system works entirely in the workspace, with no need for the configuration space or matrix inversions;
• Tested on a commercial work-class UVMS: the approach was successfully tested on a commercial work-class ROV equipped with hydraulic manipulators for an underwater inspection task with multiple targets. These robots are not really designed for autonomous applications, so additional challenges arise, such as the slow serial communication with the manipulators.
This paper is structured as follows. Firstly, we present some related works in Section 2. We introduce the M-FABRIK algorithm in Section 3. In Section 4, the proposed approach is presented. Section 5 presents the experimental setup. Results are presented in Section 6 and conclusions in Section 7.

Related Work
The configuration of a UVMS means the set of all the joint angles of the manipulator as well as the pose of the ROV in 3D space. This configuration must be set in order to place the end-effector in a desired position to perform the task. However, this is not easy, especially because UVMS are always kinematically redundant, due to the combination of the mobility provided by the vehicle with the degrees of freedom provided by the manipulator.
Thus, there are many approaches that focus on solving the redundancy problem. Antonelli and Chiaverini proposed an approach in [21], in which the base ROV is positioned in order to reduce energy consumption and increase the system's manipulation capability. They also proposed another approach in [22], with a focus on finding an optimal posture considering the limits of joints and minimizing the movements of roll and pitch angles. Ismail et al. proposed in [23] an approach aimed at minimizing gravity and load fluctuation. In the approach proposed in [12], the focus was on minimizing base movement. Neural networks are also used in [24] to deal with uncertainties and disturbance problems.
In order to pose the end-effector in the desired position, the aforementioned approaches take into consideration the global pose of the UVMS. However, as stated in [25], it is difficult to obtain precise earth-fixed pose information in a real experiment with only on-board sensors. Thus, the use of vision information to control the end-effector, known as visual servoing, has gained significant attention in underwater autonomous interventions [25].
In [26], an image-based visual servoing (IBVS) scheme for redundant UVMS with an eye-in-hand camera was proposed, taking into consideration system constraints, dynamic uncertainties, and the absence of vehicle velocity sensors for a typical UVMS. For that, the authors proposed a hierarchical control architecture composed of an unscented Kalman filtering (UKF)-based vehicle motion estimator, a kinematic model predictive IBVS controller, and two decoupled dynamic velocity controllers for the vehicle and the manipulator, respectively. In [25], an uncalibrated visual servoing scheme for a UVMS with an eye-in-hand camera is proposed under uncertainties, including visual system parameters, UVMS kinematics and feature position.
Although a number of works using visual servoing for UVMS have been published recently, the area is still underexplored [25]. Evidence of this is that most of these works only present results by simulation. Some works that performed real experiments are covered by Simetti in [27], a comprehensive review of recent work in the area. Among these works, [28] and [29] can be highlighted. They present visual servoing experiments in which fiducial markers are attached to the target and tasks of opening valves, monitoring and connector plug operations are performed. All of these works were implemented on research prototype ROVs equipped with electrical manipulators, and the experiments were performed in pools or water tanks.
When we consider experiments performed with commercial hardware, such as work-class ROVs equipped with robust hydraulic manipulators, the number of works is even lower. One of the most cited works is [7]. In their work, Sivcev et al. used an underwater manipulator for a valve-opening task with a position-based visual servoing (PBVS) system, also based on the detection of fiducial markers. However, the approach presented by the authors considers the ROV parked on the seabed to perform the task.
Despite the successful accomplishment of the task, the authors mention a problem with the amount of time taken by the inverse kinematics approach, which directly affects the frequency of target pose estimation. This problem could even result in the object leaving the camera field of view before the next image is taken for the pose estimation, opening the visual feedback control loop because of the lack of visual measurements and eventually leading to servoing failure.
The inverse kinematics implemented in [7] is based on the second-order pseudo-inverse of the Jacobian. Nevertheless, the inversion of the Jacobian matrix requires matrix calculations that can lead to high computational cost and the need to deal with singularity problems.
Although this is a well-known problem, using the pseudo-inverse of the Jacobian is still the classical approach, and some approaches have been proposed to overcome the singularity problem and the computational cost. An example is the work proposed in [18], which presents a task-priority-based control with the ability to enable and disable tasks without discontinuity. In this work, the authors deal with the singularity problem by including an additional regularization cost, and the approach has a low computational cost, being able to run in real-time even for highly redundant systems.
However, the approach still includes matrix inversions and the size of the matrices changes according to the number of degrees of freedom and the number of tasks assigned to the robot.
Meanwhile, many approaches have been developed that aim to avoid both the Jacobian matrix and matrix inversions. However, most of them are applied to fixed-base manipulators, such as the classical heuristic algorithms CCD (Cyclic Coordinate Descent) [30] and the Triangulation [31]. Another heuristic algorithm, which has been gaining great attention recently, is called FABRIK (Forward And Backward Reaching Inverse Kinematics) [20].
FABRIK avoids the use of rotational angles or matrices by locating each joint position on a line in the workspace. The algorithm normally converges in a few iterations and has a low computational cost. Thus, the great advantage of FABRIK is its fast convergence and simplicity. Moreover, it handles constraints and joint movement limits.
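The core geometric operation can be sketched as follows (a minimal illustration under our own naming, not the authors' implementation): given the already-placed neighbouring joint and the old position of the joint being updated, the new joint position lies on the line between them, at exactly the link length from the placed joint.

```python
import math

def reposition(p_fixed, p_old, link_length):
    """One FABRIK step: place a joint on the line from p_fixed (the
    already-placed neighbour) towards p_old (the joint's old position),
    at exactly link_length from p_fixed. Points are 3-tuples."""
    delta = [o - f for f, o in zip(p_fixed, p_old)]
    dist = math.sqrt(sum(c * c for c in delta))
    r = link_length / dist  # scale factor along the line
    return tuple(f + r * c for f, c in zip(p_fixed, delta))
```

No angles or matrices appear anywhere: only distances and points on lines, which is what makes the algorithm cheap and singularity-free.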
FABRIK was originally developed for the computer graphics area [20], but due to its advantages, FABRIK has also been applied for robotic manipulators [32] and recently we proposed in [19] an extension for FABRIK to deal with manipulators mounted on a wheeled omnidirectional robot, named M-FABRIK.
In this paper, we propose to extend M-FABRIK to work in a 3D underwater environment in order to be applied to a UVMS. With this extension, the algorithm is able to set the entire configuration of the manipulator as well as the ROV position in the workspace. Furthermore, the ROV position can be easily defined by means of several important criteria, such as the ones presented in [12], which include avoiding contact with the environment, keeping a certain distance from the seabed, adjusting the vehicle camera view, respecting joint limits and so on.
In the next section we present how the original algorithm of M-FABRIK works, then in Section 4 we present the extension proposed in this work.

M-FABRIK
M-FABRIK is a new algorithm we proposed in [19], which consists of an extension of FABRIK for manipulators mounted on omnidirectional wheeled platforms. Like the original approach, M-FABRIK works by minimizing the error between the target point and the end-effector in two stages, first in the end-effector-to-platform direction, and then in the platform-to-end-effector direction. Nevertheless, besides setting the position of each joint, M-FABRIK also sets the position of the platform. Figure 1 presents an example of a complete iteration of M-FABRIK. In this example, p1, p2, p3 and p4 represent the joints of a manipulator mounted on a mobile platform and the letter t indicates the target point. For this mobile manipulator, p1 is attached to the platform and p4 represents the end-effector.
In the first stage, named Forward Reaching, the algorithm starts by placing the end-effector at the target position, so p'4 is its new position, as presented in Fig. 1b. A straight line is then traced from p'4 to the next joint p3 and, by limiting this line according to the length of the manipulator link, the new position of the third joint, p'3, is selected, as presented in Fig. 1c. The same process is repeated for the next joints until a new position p'1 is defined. At this point, the Forward Reaching stage is over and p'1 is usually out of its original position. As the classical FABRIK was proposed for fixed-base manipulators, the second stage, Backward Reaching, would normally be performed from the base's original position. Nonetheless, for the M-FABRIK algorithm, the base is mounted on a mobile platform, which allows the Backward Reaching stage to be performed from a different place rather than only from the original position. The new position of the base can be arbitrarily chosen based on different criteria that respect the constraints of the robot.
Aiming to decrease the number of iterations and the convergence time, we proposed choosing the closest point, obtained by projecting the base onto the plane where the mobile platform can move, as presented in Fig. 1d. Then, the platform can move to place the base in the new position p'1 and the Backward Reaching stage is performed from this point, as presented in Fig. 1e.
The execution of the Backward Reaching stage might lead the end-effector to be out of the target position. Then, Forward and Backward Reaching stages are performed iteratively until the robot reaches the target from an acceptable position for the platform.
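The two stages and the base relocation described above can be sketched in a few lines (an illustrative sketch for a planar mobile base, with our own function names, not the authors' code):

```python
import math

def _place(p_from, p_to, length):
    # point at `length` from p_from along the line towards p_to
    d = math.dist(p_from, p_to)
    r = length / d
    return tuple(f + r * (t - f) for f, t in zip(p_from, p_to))

def mfabrik_iteration(joints, lengths, target, base_plane_z):
    """One M-FABRIK iteration. joints: list of 3D joint positions with
    joints[0] as the base; lengths[i] links joint i to joint i+1."""
    j = list(joints)
    # Forward Reaching: start from the target and work back to the base
    j[-1] = target
    for i in range(len(j) - 2, -1, -1):
        j[i] = _place(j[i + 1], j[i], lengths[i])
    # relocate the base to the closest admissible point: its projection
    # onto the plane where the mobile platform can move
    j[0] = (j[0][0], j[0][1], base_plane_z)
    # Backward Reaching: from the relocated base out to the end-effector
    for i in range(1, len(j)):
        j[i] = _place(j[i - 1], j[i], lengths[i - 1])
    return j
```

The two stages are simply the same line-placement step run in opposite directions along the chain, with the base projection inserted between them.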
The example presented in Fig. 1 shows a simple application in which all the joints of the manipulator work in the same plane. However, it is important to highlight that FABRIK contemplates different types of joint models and their constraints [33]. Consequently, M-FABRIK inherits this capability.
Nonetheless, the main advantage of M-FABRIK is the possibility of choosing the base position so as to accomplish additional requirements besides reaching the target, such as avoiding obstacles, respecting joint limits, increasing manipulability or decreasing joint effort. For that purpose, the concept of an admissible area was introduced, which consists of the region where the base can be positioned to ensure both the target reachability and the accomplishment of the additional requirements. This area is created by setting the sub-region from where the robot can reach the target and subtracting the sub-regions where the base would violate the additional objectives.
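The admissible-area construction, "reach region minus rejection sub-regions", can be expressed as a simple membership test (an illustrative sketch; the predicate-based encoding of rejection sub-regions is our own assumption):

```python
import math

def in_admissible_area(base, target, reach, rejection_tests):
    """True if the base position ensures target reachability (inside the
    reach sphere of radius `reach` around the target) and violates no
    additional requirement (each rejection sub-region is a predicate)."""
    if math.dist(base, target) > reach:
        return False  # target cannot be reached from this base position
    return not any(rejected(base) for rejected in rejection_tests)
```

A requirement such as "keep a minimum altitude above the seabed" then becomes just one more predicate in the list, e.g. `lambda p: p[2] < 0.5`.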
All the ideas proposed in [19] to extend FABRIK to manipulators mounted on an omnidirectional wheeled platform are now extended to underwater vehicle manipulator systems in this paper and this approach is presented in the next section.

Extending M-FABRIK for UVMS
The main idea of extending M-FABRIK to UVMS is based on extrapolating the concept of the admissible area from the 2D plane to the 3D environment. Since the navigable area for an ROV consists of the entire 3D underwater space, the base position found at the end of the Forward Reaching stage is already a position the vehicle can move to. Therefore, the Backward Reaching stage is not even necessary in this case, and the algorithm can reach a final configuration in less than one complete iteration. Figure 2 shows an example of the approach being applied to the Étaín MRE-ROV of the CRIS Centre in the University of Limerick, a work-class ROV equipped with two Schilling Orion manipulators. In this figure, the navigable area is the entire 3D cube and the admissible area is represented by the green sphere S, with a radius defined by the maximum extension of the manipulator.
Nevertheless, the ability of M-FABRIK to choose the base position according to different criteria is also applied to the underwater environment by introducing additional requirements to the algorithm. For example, in some operations, it is preferable that the ROV perform the task keeping a safe distance from an obstacle or even the seabed. In this case, the Backward Reaching stage cannot be neglected, and at the end of the Forward Reaching stage the base must be projected to an acceptable region. Figure 3 presents an example of such a situation. In this figure, the region in red corresponds to the rejection area where the ROV cannot be placed for safety reasons. Thus, the admissible area no longer consists of a complete sphere, as presented by the green region in Fig. 3a, and the base must be projected to the admissible area before the Backward Reaching stage starts. Therefore, extrapolating the areas of navigation, rejection and admissibility to the 3D underwater space is the key feature to extend M-FABRIK to UVMS.
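For a concrete case like the one in Fig. 3, a wall-type rejection area, the projection step could look as follows (a simplified sketch under our own assumptions: an axis-aligned wall and a single clearance constraint; a real implementation must handle the intersection of both constraints jointly):

```python
import math

def project_base(p, target, reach, wall_x, clearance):
    """Project a desired base position p into the admissible area:
    no closer than `clearance` to a wall at x = wall_x (rejection area),
    and inside the reach sphere of radius `reach` around the target."""
    q = (min(p[0], wall_x - clearance), p[1], p[2])  # enforce clearance
    d = math.dist(q, target)
    if d > reach:  # pull the point back onto the reach sphere if needed
        q = tuple(t + (c - t) * reach / d for t, c in zip(target, q))
    return q
```

With the clearance satisfied and the base back inside the reach sphere, the Backward Reaching stage can start from `q` exactly as in the planar algorithm.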
However, due to some characteristics inherent to real applications, the robot might not be able to execute the calculated inverse kinematics configuration, and consequently, the UVMS will not reach the desired position correctly. This problem can occur for several reasons, such as the dynamic influence of the base movement on the manipulator and vice versa, accuracy limitations of the robot's actuators, external disturbances, or even errors in estimating the position of the target in approaches using a computer vision system. In addition, considering the underwater environment, the dynamics of the movements are slow, and it can take considerable time for the robot to reach the desired position. All these problems can lead the algorithm to fail in completing a task with multiple targets or a moving one.
To address these issues, the algorithm is integrated into a visual servoing system: since M-FABRIK works iteratively with a fast convergence time, we create a constant feedback loop with the captured images. Thus, instead of waiting for the robot to execute the calculated configuration, the algorithm searches for a new configuration in each loop, taking into consideration the current configuration and the target estimation based on the sensory data. This allows the algorithm to be applied to multiple targets or to a moving one, since it runs continuously rather than simply executing the calculated configuration step by step.
The integration of M-FABRIK into a visual servoing system with an eye-in-hand camera adds a further challenge to the algorithm, though. The camera needs to be pointed at the target, so the orientation of the end-effector must also be set, rather than only its position. The original FABRIK algorithm does not include the definition of the end-effector orientation, possibly because the algorithm was developed for the computer graphics area, and its extensions to the robotics area do not include experiments with fixed orientation. Therefore, in this paper, we propose to overcome this issue by placing the last joint of the manipulator on a virtual line that passes through both the end-effector and the target point during the Forward Reaching stage. Thus, we can set the end-effector orientation, and the generated position ensures that the camera will be pointed at the target.
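This orientation trick stays entirely within FABRIK's point-on-line machinery. A minimal sketch (our own naming; assumes the camera's optical axis is aligned with the last link):

```python
import math

def place_last_joint(end_effector, target, last_link):
    """During Forward Reaching, place the last joint on the virtual line
    through the target and the end-effector, one link length beyond the
    end-effector, so the eye-in-hand camera points at the target."""
    u = tuple(e - t for e, t in zip(end_effector, target))
    n = math.sqrt(sum(c * c for c in u))  # distance target -> end-effector
    # extend the target->end-effector line by one link length
    return tuple(e + last_link * c / n for e, c in zip(end_effector, u))
```

Because the last link then lies exactly on the target line, no rotation matrices are needed to enforce the pointing constraint.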
The complete approach for the autonomous visual servoing is presented in the next section.

Fully Automatic Visual Servoing for UVMS based on M-FABRIK
This section describes the entire system that enables the UVMS to perform underwater tasks autonomously. Figure 4 shows a block diagram illustrating how the entire system works. First of all, the developed system works based on the estimation of the target position acquired from a single camera mounted on the end-effector of the manipulator. For that, the targets are represented by fiducial markers, such as the one presented in Fig. 5. Given the exact geometry of the fiducial marker and a previously calibrated camera, the marker position relative to the camera can be estimated by employing a planar-homography-based algorithm [34].
The algorithm outputs the estimated pose of the target in the eye-in-hand camera reference frame (T_cam). Then, forward kinematics is used to obtain the relative pose of the camera in the manipulator frame, and homogeneous transformations are performed to estimate the target pose in the ROV body frame (T_ROV).
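The chain of homogeneous transformations can be written compactly (an illustrative sketch; the frame names and mounting transform are assumptions, not the authors' notation):

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def target_in_rov_frame(T_rov_manip, T_manip_cam, T_cam_target):
    """Chain: marker pose in the camera frame, composed with the camera
    pose in the manipulator base frame (from forward kinematics) and the
    manipulator mounting pose on the ROV -> marker pose in the ROV frame."""
    return T_rov_manip @ T_manip_cam @ T_cam_target
```

Each factor is a rigid-body transform, so the composition is a single chain of 4x4 matrix products, with no inversion required.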
Once the target position is defined in the ROV body frame, the M-FABRIK algorithm receives the ROV pose (ξ_R) and the joint angles (q_i) from the robot's sensors, to estimate the mobile manipulator joint positions (p_i).
Then, the algorithm calculates the new angles of the manipulator's joints (q_i), as well as the new pose of the ROV (ξ_R). Finally, the desired joint angles are sent to the manipulator slave control unit (SCU) and the ROV pose is sent to the ROV control system, leading the entire robot to reach the target. This process takes place continuously, so that the robot configuration is always updated based on the target position. This feature allows the method to correct system imperfections or external disturbances that may occur during the task execution. Furthermore, it is possible to apply the algorithm to track multiple targets or moving ones.
The proposed method is illustrated in pseudo-code in Algorithm 1.

Algorithm 1 PBVS for UVMS.
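One cycle of the continuous loop described above can be sketched as follows (an illustrative sketch, not the authors' listing; the callables stand in for the camera/marker detector, the kinematic transformations, the robot sensors, the M-FABRIK solver and the SCU/ROV command interfaces, all of which are assumptions):

```python
def pbvs_step(estimate_pose, to_body_frame, read_state, solve_ik, send):
    """One cycle of the PBVS loop for the UVMS."""
    T_cam = estimate_pose()          # marker pose in the camera frame, or None
    if T_cam is None:
        return False                 # no visual measurement: skip this cycle
    t_rov = to_body_frame(T_cam)     # target in the ROV body frame
    q, xi = read_state()             # joint angles and ROV pose from sensors
    q_new, xi_new = solve_ik(t_rov, q, xi)   # M-FABRIK
    send(q_new, xi_new)              # to the manipulator SCU / ROV controller
    return True
```

Running this step continuously is what provides the constant visual feedback: each cycle re-solves the IK from the current configuration and the latest target estimate.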
It is important to highlight that both the original FABRIK algorithm [20] and its extended version for mobile manipulators [19] work in the global coordinate frame. Nevertheless, integrating a PBVS system with M-FABRIK allows the algorithm to be performed in the robot's base frame, with no need to transform to the global coordinate system. This feature is important because the transformation to the global frame would depend on the sensory data of the ROV, which may have associated errors.
The proposed approach was tested on a work-class ROV in an underwater environment considering multiple targets.

Experimental Setup
The experiment performed aims to show the feasibility and advantages of the proposed approach by carrying out an underwater monitoring task with an underwater vehicle manipulator system. The test was performed in the Portroe flooded quarry in County Tipperary, Ireland. Figure 6 shows the experimental setup overview, highlighting the monitoring and control cabin (Fig. 7), the ROV, and the wall where the autonomous monitoring was performed.
The experiment was performed with the ROV Étaín presented in Fig. 8. This robot is equipped with two Schilling Orion 7P hydraulic manipulators, several navigation sensors and seven thrusters, four horizontal and three vertical, which give the ROV six degrees of freedom.
Orion 7P manipulators have six rotational joints and a jaw with opening control as the end-effector. The rotation axes of the joints are presented in Table 1.
To carry out the experiment, an IDS uEye GigE camera (UI527xCP-C) with a 1/1.8" Sony CMOS sensor was mounted on the end-effector of one of the manipulators. This configuration, known as eye-in-hand, has the main advantage of increasing measurement resolution as the manipulator approaches the target, an essential point in low-visibility conditions, which are common in underwater tasks [35]. Based on the images acquired with this camera, a computer vision system was then developed to detect the desired target points in the environment and estimate their position relative to the robot.
A common task in the field of underwater robotics was defined as the objective for testing the proposed approach with the ROV Étaín. The experiment consisted of a monitoring task, in which the robot is tasked to navigate in the environment and capture images at specific target points. This approach can be used for inspection of ship hulls, platforms, bridge structures, etc.
Thus, ten different fiducial markers were placed on a wall of the flooded quarry, as shown in Fig. 10, and the robot had to capture an image of each of them. Since the images should be captured 1.70 m away from each target, the target positions used in the M-FABRIK algorithm were not the positions of the markers themselves, but virtual points created based on the required distance.
For safety reasons, it was also requested that the ROV base should never come closer than 3 m to the wall. This additional objective was accomplished through the creation of a rejection area in the M-FABRIK algorithm.
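The virtual target points mentioned above can be generated directly from each marker pose (an illustrative sketch under our own naming; it assumes the marker's outward normal is known from the pose estimation):

```python
import math

def virtual_target(marker_pos, marker_normal, standoff=1.70):
    """Virtual target point fed to the IK: `standoff` metres away from
    the marker, along the marker's outward normal."""
    n = math.sqrt(sum(c * c for c in marker_normal))
    return tuple(p + standoff * c / n
                 for p, c in zip(marker_pos, marker_normal))
```

Feeding these standoff points to the solver, instead of the marker positions, is what makes the camera stop 1.70 m in front of each marker rather than drive the end-effector into the wall.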
It is important to highlight that joints 4 and 6 of the Orion 7P manipulator would cause a rotation movement of the camera, which would result in images rotated relative to the horizontal plane of the environment. This rotation would not be a problem for the system itself, since the algorithm used to detect the fiducial markers also provides the rotation of the marker relative to the camera frame. However, the images acquired during the experiment could be rotated by different angles, requiring an extra stage to align all of them for a better visualisation of the data after the experiment. Thus, as the degrees of freedom given by the other joints and the ROV are enough to perform the task, joints 4 and 6 of the manipulator were kept fixed at 0° and 90°, respectively, throughout the experiment.
Regarding the base ROV positioning angles, roll and pitch were set to 0° and the yaw angle was fixed so that the robot flew perpendicularly to the wall.

Experimental Results
Once all the objectives and restrictions of the experiment were defined, the ROV Étaín was manually positioned in front of the first fiducial marker, at a distance of 3 m from the wall, with the manipulator in a pose that allowed the first two markers to be in the camera's field of view. From this configuration, the system was switched to autonomous mode, with the M-FABRIK algorithm calculating all the joint angles and also the ROV position. Figure 11 shows a picture of the robot executing the task, which was successfully carried out, with the robot navigating parallel to the wall, detecting all the fiducial markers and positioning them in the centre of the image correctly, as shown in Figs. 12 and 13. Video material of the trials is available on the CRIS YouTube channel [36]. Figure 14 shows a set of 9 consecutive frames to demonstrate the movement of the manipulator to position one of the fiducial markers in the centre of the image. This search for the markers was carried out throughout the experiment, and a graph representing the distance from the centre of the image to the centre of the markers during navigation is shown in Fig. 15. In this graph, the colour change represents the moment when the system detects the marker successfully and sets the next marker as a new target.
The graph shows that the algorithm was able to bring the position error to a value of less than 10 cm in the detection of all fiducial markers. Although 10 cm might sound like a high error value, it is important to highlight that the camera was positioned approximately 1.70 m away from the targets; thus, small variations in pose considerably increase the relative distance from the centre of the marker to the centre of the image. Therefore, an error of 10 cm is more than enough to obtain images centred on the markers, as presented in Figs. 12 and 13.
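To put the 10 cm figure in perspective with a back-of-the-envelope calculation of our own (not reported by the authors): at the 1.70 m standoff, a 10 cm lateral offset corresponds to a pointing error of only about 3.4 degrees.

```python
import math

# 10 cm lateral offset observed at a 1.70 m standoff distance
pointing_error_deg = math.degrees(math.atan(0.10 / 1.70))
print(round(pointing_error_deg, 1))  # prints 3.4
```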
The graph of Fig. 15 also shows a certain oscillation in the control performed to bring the camera to the centre of the target. This oscillation results mostly from two causes. The first is a rolling movement of the ROV due to the displacement of the centre of mass when frontal movements are performed by the manipulator. However, despite this disturbance, the system was able to compensate and keep the target centred. The second cause is the slow communication rate between the system and the Orion 7P manipulator. In its communication protocol, the Orion requires more than 100 ms between readings of the sensory data. Because of this delay, even with the slow dynamics of the underwater environment, the control system is unable to act quickly enough to allow a less oscillatory response.
In [7], the authors state that this delay is due to the fact that most underwater manipulators are not normally designed for autonomous operation. Therefore, since the communication process with the manipulator is slow, it is necessary that the inverse kinematics algorithm used does not add even more delay to the control system.
In terms of convergence time, M-FABRIK has already been compared with other approaches in [19], presenting better performance, especially when compared to a pseudo-inverse kinematics approach similar to the one used in [7]. The extended version for the UVMS works even faster, since, if the ROV base does not fall in a rejection area, the Backward Reaching stage is skipped. For the experiment presented in this section, the extended version of the M-FABRIK algorithm operated with an average convergence time of only 0.061 ms. The maximum time spent to find a solution during the experiment was only 1.3 ms and the minimum was 0.001 ms. These values are negligible when compared to the communication process, which takes more than 100 ms to complete. A computer with a 3.4 GHz Intel Core i7 processor and 16 GB of RAM was used in this experiment.
Equivalent results were achieved by the approach presented in [18]. Using a similar machine configuration with a 3.6 GHz Intel Core i7-4790 CPU, the authors achieved convergence times lower than 1 ms, but for a UVMS with more degrees of freedom. Thus, for the UVMS used in the experiment presented in this section, Simetti and Casalino's method may achieve times even lower than 1 ms and can also be used for such an application in real-time.
A quantitative comparison between the two approaches was not performed because a difference of microseconds in convergence time is negligible compared to the communication rate. Furthermore, the fast convergence time is only one of the main contributions of the proposed approach, which also stands out for the simplicity of implementing the whole visual servoing pipeline and the IK, the fact that the entire solution is computed in the workspace, and the small number of iterations needed to converge. For instance, in the experiment presented in this section, the average number of iterations to find a solution based on the estimated position of the target was only 2.1069; the maximum was 7 and the minimum was 1.
The algorithm also generated smooth movements for the UVMS during the experiment. Figure 16 shows the position of the ROV, as well as the estimated end-effector position, throughout the experiment. It is important to highlight that the ROV navigated laterally throughout the experiment and did not perform any frontal movement, due to the imposed safety restriction. The ROV position data were obtained from the robot's navigation sensors, and the end-effector position was estimated from the Orion 7P joint sensors. Figure 17 shows the joint angles during the experiment. The results show that the M-FABRIK algorithm can be successfully applied to underwater vehicle manipulator systems with visual feedback control, providing fast and coherent solutions while remaining simple to implement.

Conclusions
This paper presents a visual servoing approach for underwater vehicle manipulator systems based on M-FABRIK. The approach is able to set the manipulator configuration as well as the ROV position in a simple and fast way, allowing underwater interventions in situations in which the ROV is equipped with a camera and cannot be parked.
This method also allows constraining the ROV position according to different criteria the user might be interested in, such as avoiding contact with obstacles, keeping a certain distance from the seabed, adjusting the vehicle camera view, respecting joint limits or even increasing manipulability.
Simplicity, the small number of iterations and the low computational cost are the main advantages of the proposed approach. Furthermore, the method is robust to uncertainties and disturbances. These features allow the method to be used for solving practical problems in real-time applications, as demonstrated in the performed experiment.
The system feasibility and applicability are demonstrated on a manipulator mounted on a work-class ROV for a monitoring task involving multiple targets. The algorithm was able to set the configuration of the manipulator as well as the ROV position successfully during the whole experiment, keeping the targets always in the field of view of an eye-in-hand camera and capturing images of each fiducial marker in the centre of the frame. Results also show the robustness of the proposed approach in dealing with practical problems and highlight the ability of the algorithm to handle additional requirements, with the ROV being positioned at a safe distance from the wall during the entire experiment.
Different requirements can also be implemented alongside the method, and different tasks can be successfully accomplished using the proposed system. Therefore, this paper presented a new approach to control UVMS which is simple to implement, fast, allows the addition of different requirements and works for work-class UVMS. This approach is suitable for real-time applications with visual servoing feedback, such as monitoring, tracking and intervention tasks underwater.
Future work may concentrate on avoiding the disturbances caused by the displacement of the centre of mass when the manipulator performs frontal movements, and on speeding up the communication rate between the system and the manipulator, in order to avoid oscillations and achieve even better performance. In addition, new experiments may be performed in a marine environment rather than fresh water, and for different tasks.

Declarations
Ethics approval This study did not require ethics approval.

Consent to Participate
Not applicable. This study did not involve human subjects.

Consent for Publication
The authors affirm that the individuals shown in Fig. 7 are all on the list of authors of this study and that they gave informed consent for publication of the image.

Competing interests
The authors have no relevant financial or nonfinancial interests to disclose.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.