Intelligent optimization algorithms for control error compensation and task scheduling for a robotic arm

Regular Paper, published in the International Journal of Intelligent Robotics and Applications.

Abstract

A task scheduling and error control optimization method for robotic arms was developed. The arm’s accuracy after optimization with particle swarm optimization, artificial bee colony, grey wolf optimizer, the genetic algorithm, differential evolution algorithm, and the bat algorithm was compared to identify the best optimization method. Task scheduling was optimized by identifying the optimal paths to each target object. The method can control positioning error, enabling the robotic arm to reach its target coordinates with the smallest error despite being affected by interference during navigation. The proposed method was verified in virtual environments with varying target objects at different locations. The estimation results and convergence speed of each algorithm were compared to identify the most accurate algorithm. The proposed method could be used to improve the task scheduling and error control of robotic arms. The method could also be used in combination with algorithms in accordance with the requirements of practical scenarios.

(Figures 1–18 are available in the full article.)


References

  • Bojie, Y., Kaijiao, Y., Yundong, G., et al.: Technical life assessment of main equipment in power grid considering the operating state and aging mechanism. In: 2023 5th International Conference on Power and Energy Technology (ICPET), pp. 837–842. IEEE (2023)

  • Chen, X., Zhan, Q.: The kinematic calibration of an industrial robot with an improved beetle swarm optimization algorithm. IEEE Robot. Autom. Lett. 7, 4694–4701 (2022). https://doi.org/10.1109/LRA.2022.3151610


  • Das, S., Suganthan, P.N.: Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15, 4–31 (2011). https://doi.org/10.1109/TEVC.2010.2059031


  • Derrac, J., García, S., Molina, D., Herrera, F.: A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm Evol. Comput. 1, 3–18 (2011). https://doi.org/10.1016/j.swevo.2011.02.002


  • Du, J., Cai, C., Zhang, P., Tan, J.: Path planning method of robot arm based on improved RRT* algorithm. In: 2022 5th International Conference on Robotics, Control and Automation Engineering (RCAE), pp. 236–241. IEEE (2022)

  • Furqan, M., Rathi, M.: Industrial robotic claw for cottage industries. In: 2019 2nd International Conference on Computing, Mathematics and Engineering Technologies (iCoMET), pp. 1–6. IEEE (2019)

  • Gutiérrez, C.A.G., Reséndiz, J.R., Santibáñez, J.D.M., Bobadilla, G.M.: A model and simulation of a five-degree-of-freedom robotic arm for mechatronic courses. IEEE Lat. Am. Trans. 12, 78–86 (2014). https://doi.org/10.1109/TLA.2014.6749521


  • He, Z., Zhang, R., Zhang, X., et al.: Absolute positioning error modeling and compensation of a 6-DOF industrial robot. In: 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), pp. 840–845. IEEE (2019)

  • Hsiao, J.-C., Shivam, K., Lu, I.-F., Kam, T.-Y.: Positioning accuracy improvement of industrial robots considering configuration and payload effects via a hybrid calibration approach. IEEE Access 8, 228992–229005 (2020). https://doi.org/10.1109/ACCESS.2020.3045598


  • Jacob, S., Menon, V.G., Al-Turjman, F., et al.: Artificial muscle intelligence system with deep learning for post-stroke assistance and rehabilitation. IEEE Access 7, 133463–133473 (2019). https://doi.org/10.1109/ACCESS.2019.2941491


  • Karaboga, D., Akay, B.: Artificial bee colony (ABC) algorithm on training artificial neural networks. In: 2007 IEEE 15th Signal Processing and Communications Applications, pp. 1–4. IEEE (2007)

  • Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of ICNN’95—International Conference on Neural Networks, pp. 1942–1948. IEEE (1995)

  • Kuo, P.-H., Syu, M.-J., Yin, S.-Y., Liu, H.-H., Zeng, C.-Y., Lin, W.-C., Yau, H.-T.: Experimental result. (2023). https://youtu.be/rLtrDVgBl-8. Accessed 8 Sept 2023

  • Lee, K.Y.: Tutorial on intelligent optimization and control for power systems: an introduction. In: Proceedings of the 13th International Conference on Intelligent Systems Application to Power Systems, ISAP’05, pp. 2–5. IEEE (2005)

  • Li, G., Shi, B., Liu, R., Gu, J.: Error modeling and compensation for a 6-DOF robotic crusher based on genetic algorithm. In: 2020 International Conference on Big Data & Artificial Intelligence & Software Engineering (ICBASE), pp. 334–337. IEEE (2020)

  • Li, D., Wang, L., Cai, J., et al.: Research on terminal distance index-based multi-step ant colony optimization for mobile robot path planning. IEEE Trans. Autom. Sci. Eng. 20, 2321–2337 (2023). https://doi.org/10.1109/TASE.2022.3212428


  • Lin, S., Li, F., Li, X., et al.: Improved artificial bee colony algorithm based on multi-strategy synthesis for UAV path planning. IEEE Access 10, 119269–119282 (2022). https://doi.org/10.1109/ACCESS.2022.3218685


  • Liu, J., Wei, X., Huang, H.: An improved Grey Wolf Optimization algorithm and its application in path planning. IEEE Access 9, 121944–121956 (2021). https://doi.org/10.1109/ACCESS.2021.3108973


  • Liu, H., Chen, Q., Pan, N., et al.: UAV stocktaking task-planning for industrial warehouses based on the improved hybrid differential evolution algorithm. IEEE Trans. Ind. Inform. 18, 582–591 (2022). https://doi.org/10.1109/TII.2021.3054172


  • Matuga, M.: Control and positioning of robotic arm on CNC cutting machines and their applications in industry. In: 2018 Cybernetics & Informatics (K&I), pp. 1–6. IEEE (2018)

  • Mirjalili, S., Mirjalili, S.M., Lewis, A.: Grey Wolf Optimizer. Adv. Eng. Softw. 69, 46–61 (2014). https://doi.org/10.1016/j.advengsoft.2013.12.007


  • Peng, T., Zhang, T., Sun, Z.: Research on robot accuracy compensation method based on modified Grey Wolf Algorithm. In: 2023 8th Asia-Pacific Conference on Intelligent Robot Systems (ACIRS), pp. 1–6. IEEE (2023)

  • Radeaf, H.S., Al-Faiz, M.Z.: Inverse kinematics optimization for humanoid robotic legs based on particle swarm optimization. In: 2023 15th International Conference on Developments in eSystems Engineering (DeSE), pp. 94–99. IEEE (2023)

  • Sharkawy, A.-N., Koustoumpardis, P.N., Aspragathos, N.: A recurrent neural network for variable admittance control in human–robot cooperation: simultaneously and online adjustment of the virtual damping and inertia parameters. Int. J. Intell. Robot. Appl. 4, 441–464 (2020). https://doi.org/10.1007/s41315-020-00154-z


  • Su, C., Zhang, B., Li, Y.: Multi-body collaborative scheduling strategy based on Bessel curve and Grey Wolf Algorithm. In: 2023 12th International Conference of Information and Communication Technology (ICTech), pp. 241–248. IEEE (2023)

  • Sunantasaengtong, P., Chivapreecha, S.: Mixed K-means and GA-based weighted distance fingerprint algorithm for indoor localization system. In: TENCON 2014—2014 IEEE Region 10 Conference, pp. 1–5. IEEE (2014)

  • Tamizi, M.G., Yaghoubi, M., Najjaran, H.: A review of recent trend in motion planning of industrial robots. Int. J. Intell. Robot. Appl. 7, 253–274 (2023). https://doi.org/10.1007/s41315-023-00274-2


  • Tang, J., Liu, G., Pan, Q.: A review on representative swarm intelligence algorithms for solving optimization problems: applications and trends. IEEE/CAA J. Autom. Sin. 8, 1627–1643 (2021). https://doi.org/10.1109/JAS.2021.1004129


  • Tang, S., Cheng, X., Zhou, P., et al.: Compensation method of robotic arm positioning error under extreme cold and large temperature difference based on BP neural network. In: 2022 IEEE International Conference on Unmanned Systems (ICUS), pp. 128–135. IEEE (2022)

  • Thor, M., Manoonpong, P.: Error-based learning mechanism for fast online adaptation in robot motor control. IEEE Trans. Neural Netw. Learn. Syst. 31, 2042–2051 (2020). https://doi.org/10.1109/TNNLS.2019.2927737


  • Wang, H.: Continuum robot path planning based on improved genetic algorithm. In: 2022 2nd International Conference on Algorithms, High Performance Computing and Artificial Intelligence (AHPCAI), pp. 23–29. IEEE (2022)

  • Wang, G., Guo, L., Duan, H., et al.: A bat algorithm with mutation for UCAV path planning. Sci. World J. 2012, 1–15 (2012). https://doi.org/10.1100/2012/418946


  • Wang, Y., Bai, P., Liang, X., et al.: Reconnaissance mission conducted by UAV swarms based on distributed PSO path planning algorithms. IEEE Access 7, 105086–105099 (2019). https://doi.org/10.1109/ACCESS.2019.2932008


  • Wang, Q., Wang, Z., Shuai, M.: Trajectory planning for a 6-DoF manipulator used for orthopaedic surgery. Int. J. Intell. Robot. Appl. 4, 82–94 (2020). https://doi.org/10.1007/s41315-020-00117-4


  • Wu, Z., Chen, S., Han, J., et al.: A low-cost digital twin-driven positioning error compensation method for industrial robotic arm. IEEE Sens. J. 22, 22885–22893 (2022). https://doi.org/10.1109/JSEN.2022.3213428


  • Ye, L., Zheng, D.: Stable grasping control of robot based on particle swarm optimization. In: 2021 IEEE 2nd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), pp. 1020–1024. IEEE (2021)

  • Zanchettin, A.M., Messeri, C., Cristantielli, D., Rocco, P.: Trajectory optimisation in collaborative robotics based on simulations and genetic algorithms. Int. J. Intell. Robot. Appl. 6, 707–723 (2022). https://doi.org/10.1007/s41315-022-00240-4


  • Zhan, X., Chen, Z.: Path planning of service robot based on improved particle swarm optimization algorithm. In: 2023 4th International Symposium on Computer Engineering and Intelligent Communications (ISCEIC), pp. 244–248. IEEE (2023)

  • Zhao, Y., Zhou, D., Piao, H., et al.: Cooperative multiple task assignment problem with target precedence constraints using a waitable path coordination and modified genetic algorithm. IEEE Access 9, 39392–39410 (2021). https://doi.org/10.1109/ACCESS.2021.3063263


  • Zhou, X., Gao, F., Fang, X., Lan, Z.: Improved Bat algorithm for UAV path planning in three-dimensional space. IEEE Access 9, 20100–20116 (2021). https://doi.org/10.1109/ACCESS.2021.3054179


  • Zhou, Z., Zhao, J., Zhang, Z., Li, X.: Motion planning of dual-chain manipulator based on artificial Bee colony algorithm. In: 2023 9th International Conference on Control, Automation and Robotics (ICCAR), pp. 55–60. IEEE (2023)


Acknowledgements

We would like to thank all our research assistants for their help.

Funding

This work is supported by the National Science and Technology Council, Taiwan, under Grants NSTC 111-2218-E-194-007, NSTC 112-2218-E-194-006, MOST 111-2823-8-194-002, MOST 111-2221-E-194-052, MOST 109-2221-E-194-053-MY3, and NSTC 112-2221-E-194-032. This work was also partially supported by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) through the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project of the Ministry of Education (MOE) in Taiwan.

Author information

Authors and Affiliations

Authors

Contributions

Her-Terng Yau and Ping-Huan Kuo designed this study. Min-Jhih Syu, Shuo-Yi Yin, Han-Hao Liu, Chao-Yi Zeng, and Wei-Chih Lin built the program, collected the data, and performed the experiments. Her-Terng Yau, Ping-Huan Kuo, Min-Jhih Syu, Shuo-Yi Yin, Han-Hao Liu, Chao-Yi Zeng, and Wei-Chih Lin wrote the paper. All authors contributed to, reviewed, and approved the manuscript.

Corresponding author

Correspondence to Her-Terng Yau.

Ethics declarations

Conflict of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Appendix

1.1 PSO

PSO was proposed by Kennedy and Eberhart (1995). This algorithm is based on the movement of bird flocks, and the interactions among individual birds affect the overall flight path. Each bird changes its flight path on the basis of its own experience and the collective experience of the entire flock to gradually approach the destination.

Using PSO to solve an optimization problem requires initializing numerous variables, including the population positions and the weights of the individual-experience and group-experience terms. Each particle is assigned a random initial position, which yields a solution; this solution is taken as the particle's personal best, and the global best is selected from all personal bests. In each iteration, every particle updates its velocity from three components: its inertia, its attraction toward its personal best, and its attraction toward the global best, with the latter two terms scaled by random numbers. The new velocity is added to the particle's current position to obtain its new position, which is checked to confirm that it lies within the solution space. The new solution is compared with the particle's personal best, and the superior one is retained; if the personal best improved, it is also compared with the global best, and the superior of the two becomes the new global best. This process is repeated until the maximum iteration number is reached. Figure 19 presents a flowchart of the PSO algorithm.

Fig. 19

Flowchart of the PSO algorithm

The position and velocity of a particle in each iteration are updated using (7) and (8), respectively. In these equations, k represents the iteration number, i and j index the particle and the dimension, w represents the inertia weight, c1 and c2 are learning factors, \(r_{1j}^{k}\) and \(r_{2j}^{k}\) are random values from the interval [0, 1], pbest represents the personal best solution of the particle, and gbest represents the global best solution selected from all pbest values (Wang et al. 2019).

$$Pos_{ij}^{k + 1} = Pos_{ij}^{k} + Vel_{ij}^{k + 1}$$
(7)
$$Vel_{ij}^{k + 1} = wVel_{ij}^{k} + c_{1} r_{1j}^{k} \left( {pbest_{ij}^{k} - Pos_{ij}^{k} } \right) + c_{2} r_{2j}^{k} \left( {gbest_{j}^{k} - Pos_{ij}^{k} } \right)$$
(8)
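The update rule in Eqs. (7)–(8) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the sphere objective, search bounds, position clamping, and parameter values are assumptions added to make the example self-contained.

```python
import random

def pso(objective, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal PSO: velocity = inertia + cognitive + social terms, Eqs. (7)-(8)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (gbest[j] - pos[i][j]))     # Eq. (8)
                pos[i][j] = min(hi, max(lo, pos[i][j] + vel[i][j]))  # Eq. (7), clamped
            val = objective(pos[i])
            if val < pbest_val[i]:                   # update personal best
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:                  # update global best
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

Calling `pso(lambda x: sum(v * v for v in x), dim=2)` minimizes a two-dimensional sphere function and returns a point near the origin.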

1.2 ABC

ABC was proposed by Karaboga and Akay (2007) and is based on the foraging behavior of honeybees. Individual bees perform different tasks and exchange information to identify the optimal solution. The bees act as hired bees, observation bees, and scout bees. Each hired bee is associated with a specific source of honey. When a source is depleted, the hired bee becomes an observation bee. Observation bees select a new source of honey on the basis of the information shared by hired bees, and scout bees explore random honey sources. A honey source can be considered a potential optimal solution to a problem.

After the honey sources have been initialized, a swarm of bees is generated, and the honey source locations and maximum number of searches are determined. The richness of each honey source is then calculated, and each bee begins its work. Hired bees retain the current optimal sources and record the number of undesirable sources. Observation bees receive information shared by hired bees and select the optimal honey sources with the roulette wheel strategy. This step ensures that honey sources with the highest fitness values are foraged. When observation bees search for optimal honey sources, they assign each honey source a limit variable denoting the number of unsuccessful attempts to forage the source. When a source is foraged successfully, limit = 0; otherwise, limit is increased by 1. When multiple attempts to forage a source are unsuccessful, scout bees mark the source as a nonoptimal solution; this occurs when the number of searches in a source exceeds the preset limit. Accordingly, the source is abandoned, and a new source is generated randomly with its limit variable reset to 0. This process is repeated until the maximum iteration number is reached. Figure 20 presents a flowchart of the ABC algorithm.

Fig. 20

Flowchart of the ABC algorithm

The limit parameter that triggers the abandonment of a honey source is defined during the initialization of ABC. Equation (9) is used to initialize the honey source locations. In this equation, d indexes the dimensions of the function, and \(Max_{d}\) and \(Min_{d}\) represent the upper and lower limits of dimension d, respectively.

$$x_{id}^{0} = Min_{d} + rand\left( {0,1} \right) \times \left( {Max_{d} - Min_{d} } \right)$$
(9)

Observation bees find new honey sources according to (10), in which k = 1, 2, …, N, k ≠ i, and l is a random number from the interval [−1, 1] that represents the degree to which an individual bee is affected by the other bees. New and old honey sources are compared by greedy selection, with the relevant fitness function expressed in (11).

$$x_{id}^{t + 1} = x_{id}^{t} + l \times \left( {x_{id}^{t} - x_{kd}^{t} } \right)$$
(10)
$$fitness_{i} = \left\{ {\begin{array}{*{20}c} { 1/\left( {1 + fit_{i} } \right),\, fit_{i} \ge 0} \\ {1 + \left| {fit_{i} } \right|, \,fit_{i} < 0} \\ \end{array} } \right.$$
(11)

Observation bees select honey sources according to the roulette wheel method to ensure that sources with higher fitness values are harvested more easily, with the selection probability calculated using (12). If the honey source changes, limit is reset to 0; otherwise, limit is incremented by one (Lin et al. 2022).

$$Pc_{i} = \frac{{fitness_{i} }}{{\mathop \sum \nolimits_{i = 0}^{Ne} \,fitness_{i} }}$$
(12)
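The three bee phases and Eqs. (9)–(12) can be sketched as below. This is an illustrative minimization sketch, not the paper's code; the objective, bounds, and parameter defaults are assumptions.

```python
import random

def abc(objective, dim, n_sources=10, iters=300, limit=20,
        bounds=(-5.0, 5.0), seed=0):
    """Minimal ABC sketch: hired (employed), observation (onlooker), scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds

    def new_source():                       # Eq. (9): random source within the bounds
        return [lo + rng.random() * (hi - lo) for _ in range(dim)]

    def fitness(cost):                      # Eq. (11)
        return 1.0 / (1.0 + cost) if cost >= 0 else 1.0 + abs(cost)

    def neighbour(i):                       # Eq. (10): perturb one dimension toward a peer
        k = rng.choice([s for s in range(n_sources) if s != i])
        d = rng.randrange(dim)
        cand = sources[i][:]
        l = rng.uniform(-1.0, 1.0)
        cand[d] = min(hi, max(lo, cand[d] + l * (sources[i][d] - sources[k][d])))
        return cand

    def try_improve(i):                     # greedy replacement; count failures in trials
        cand = neighbour(i)
        c = objective(cand)
        if fitness(c) > fitness(costs[i]):
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    sources = [new_source() for _ in range(n_sources)]
    costs = [objective(s) for s in sources]
    trials = [0] * n_sources
    for _ in range(iters):
        for i in range(n_sources):          # hired-bee phase
            try_improve(i)
        for _ in range(n_sources):          # observation phase: roulette wheel, Eq. (12)
            fits = [fitness(c) for c in costs]
            total = sum(fits)
            i = rng.choices(range(n_sources), weights=[f / total for f in fits])[0]
            try_improve(i)
        for i in range(n_sources):          # scout phase: abandon exhausted sources
            if trials[i] > limit:
                sources[i] = new_source()
                costs[i], trials[i] = objective(sources[i]), 0
    b = min(range(n_sources), key=lambda i: costs[i])
    return sources[b], costs[b]
```

The `trials` list plays the role of the limit counter described above: it resets to 0 on a successful forage and triggers a scout replacement once it exceeds `limit`.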

1.3 GWO

GWO was proposed by Mirjalili et al. (2014). The algorithm mimics the hunting behavior of grey wolves, which follow a leadership hierarchy and undertake different hunting tasks. In the algorithm, grey wolves are divided into four hierarchical levels. The first level is led by α wolves, which are in charge of making crucial decisions; other wolves must obey the α wolves. The second level comprises β wolves, which assist the α wolves in decision-making and become new α wolves when old α wolves pass away. The β wolves command other lower-level wolves. The third level comprises δ wolves, which must obey the wolves in higher levels and command the ω wolves in the lowest level. These ω wolves must obey the wolves in other levels, ensuring the harmony of the entire group.

The GWO algorithm begins by establishing a society, and the fitness value of every wolf in each pack is calculated to identify the three wolves with the highest fitness value. These three wolves are sequentially labeled as α, β, and δ; the remaining wolves in the pack are labeled as ω. After the leadership hierarchy has been established, the optimization process is mainly led by the α, β, and δ wolves in each pack to identify the top three optimal solutions.

In the process of encircling prey, the α, β, and δ wolves command other wolves in the same pack to gradually approach it. During each search for prey, the top three optimal solutions (i.e., the α, β, and δ wolves) are retained, and their information is used to update the positions for other searches. When the α, β, and δ wolves estimate the location of prey, the ω wolves update their own locations according to the commands of the top wolves.

In the hunting stage, the prey stops moving, and the wolf pack attacks it. The wolf pack moves in a search area with a specific radius. When the prey is found in this area, it is attacked by the wolves. Figure 21 presents a flowchart of the GWO algorithm.

The prey-searching behavior of grey wolves is expressed in (13), in which D represents the distance between a grey wolf and the target prey, t represents the number of iterations, \(X_{p} \left( t \right)\) represents the current position of the prey, and \(X\left( t \right)\) represents the current position vector of the grey wolf. In (14), \(X\left( {t + 1} \right)\) represents the updated position vector of the grey wolf in the next iteration. The coefficient vectors A and C are expressed in (15) and (16), respectively, in which \(r_{1}\) and \(r_{2}\) are random vectors from the interval [0, 1]. In (17), a decreases linearly from 2 to 0 during the iteration process; accordingly, A ranges from −2 to 2, and C ranges from 0 to 2.

$$D = \left| {C \cdot X_{p} \left( t \right) - X\left( t \right)} \right|$$
(13)
$$X\left( {t + 1} \right) = X_{p} \left( t \right) - A \cdot D$$
(14)
$$A = 2ar_{1} - a$$
(15)
$$C = 2r_{2}$$
(16)
$$a = 2\left( {1 - t/T} \right)$$
(17)

When the pack surrounds the prey, the α wolf leads the β and δ wolves to approach it. Equations (18)–(23) are used to calculate the distances to the three leading wolves and the corresponding candidate positions, and (24) averages the three candidates to obtain each wolf's updated position. In these equations, \(X_{\alpha } \left( t \right)\), \(X_{\beta } \left( t \right)\), and \(X_{\delta } \left( t \right)\) represent the position vectors of the α, β, and δ wolves, respectively. The terms \(C_{1}\), \(C_{2}\), and \(C_{3}\), which are calculated using (16), range between 0 and 2.

$$D_{\alpha } = \left| {C_{1} \cdot X_{\alpha } \left( t \right) - X\left( t \right)} \right|$$
(18)
$$D_{\beta } = \left| {C_{2} \cdot X_{\beta } \left( t \right) - X\left( t \right)} \right|$$
(19)
$$D_{\delta } = \left| {C_{3} \cdot X_{\delta } \left( t \right) - X\left( t \right)} \right|$$
(20)
$$X_{1} = X_{\alpha } \left( t \right) - A_{1} \cdot D_{\alpha }$$
(21)
$$X_{2} = X_{\beta } \left( t \right) - A_{2} \cdot D_{\beta }$$
(22)
$$X_{3} = X_{\delta } \left( t \right) - A_{3} \cdot D_{\delta }$$
(23)
$${\text{X}}\left( {{\text{t}} + 1} \right) = \frac{{{\text{X}}_{1} + {\text{X}}_{2} + {\text{X}}_{3} }}{3}$$
(24)

When the prey stops moving, the pack begins attacking it. During the iteration process, the magnitude of A decreases as a decays, which represents the transition from searching to attacking the prey. When \(\left| A \right| > 1\), an individual wolf is far away from the prey and conducts a global search. When \(\left| A \right| \le 1\), the wolf begins attacking the prey (Liu et al. 2021).
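Equations (13)–(24) can be sketched as follows. This is a minimal illustration under assumed bounds, pack size, and a sphere objective, none of which come from the paper.

```python
import random

def gwo(objective, dim, n_wolves=12, iters=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal GWO sketch: the three fittest wolves (alpha, beta, delta)
    steer every pack member toward the estimated prey position."""
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=objective)
        leaders = [w[:] for w in wolves[:3]]          # alpha, beta, delta snapshots
        a = 2.0 * (1.0 - t / iters)                   # Eq. (17): a decays from 2 to 0
        for w in wolves:
            for j in range(dim):
                x = 0.0
                for leader in leaders:
                    A = 2.0 * a * rng.random() - a    # Eq. (15)
                    C = 2.0 * rng.random()            # Eq. (16)
                    D = abs(C * leader[j] - w[j])     # Eqs. (13), (18)-(20)
                    x += leader[j] - A * D            # Eqs. (14), (21)-(23)
                w[j] = min(hi, max(lo, x / 3.0))      # Eq. (24): average of X1, X2, X3
    best = min(wolves, key=objective)
    return best, objective(best)
```

As a decays, |A| falls below 1 and the pack shifts from global exploration to attacking (exploiting) the best-known region.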

Fig. 21

Flowchart of GWO

1.4 GA

The fundamental concept of the GA was proposed by John Holland and his students in 1975 (Sunantasaengtong and Chivapreecha 2014). The algorithm is based on the theory of natural selection proposed by Charles Darwin. In accordance with the logic of survival of the fittest, the algorithm undergoes multiple iterations of updating until the solution converges.

The gene manipulation process of the algorithm is divided into three stages, namely selection, reproduction, and crossover and mutation. The algorithm involves different populations, chromosomes, individuals, and genes. A population is a set of chromosomes, each of which comprises multiple genes. Each chromosome is an array of numbers that stores the parameter values (genes) of a solution. Therefore, each chromosome represents a potential solution of a problem.

The initialization process begins by randomly selecting multiple chromosomes to form a population. The chromosomes with high fitness values are then selected to undergo reproduction, crossover, and mutation, generating the next generation of chromosomes. These new chromosomes are screened to select and retain those with high fitness values as the next generation of the population. This inheritance and evolution process is repeated to gradually create chromosomes with extremely high fitness values, eventually converging to the optimal chromosome. Figure 22 presents a flowchart of the GA.
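The selection–crossover–mutation cycle described above can be sketched as a real-coded GA. This is an illustrative sketch, not the authors' implementation; tournament selection, one-point crossover, Gaussian mutation, elitism, and all parameter values are assumptions.

```python
import random

def ga(objective, dim, pop_size=30, gens=100, cx_rate=0.9, mut_rate=0.1,
       bounds=(-5.0, 5.0), seed=0):
    """Minimal GA sketch: each chromosome is a list of real-valued genes."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=objective)
        nxt = [pop[0][:]]                       # elitism: keep the best chromosome
        while len(nxt) < pop_size:
            # selection: tournament of three for each parent
            p1 = min(rng.sample(pop, 3), key=objective)
            p2 = min(rng.sample(pop, 3), key=objective)
            child = p1[:]
            if rng.random() < cx_rate and dim > 1:   # one-point crossover
                cut = rng.randrange(1, dim)
                child = p1[:cut] + p2[cut:]
            for j in range(dim):                # per-gene Gaussian mutation
                if rng.random() < mut_rate:
                    child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.3)))
            nxt.append(child)
        pop = nxt
    best = min(pop, key=objective)
    return best, objective(best)
```

Elitism guarantees that the best fitness found so far never regresses between generations, which is what drives the gradual convergence described above.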

Fig. 22

Flowchart of the GA

1.5 DE

DE was proposed by Storn and Price in 1995 (Das and Suganthan 2011). It uses vectors to examine the differences between individuals in a population. A random search is performed by adding difference vectors to population members, which are evaluated over multiple iterations of mutation, recombination, and selection. This algorithm is similar to the GA; however, it performs mutation before crossover, yielding more diverse solutions than the GA. Accordingly, the problem-solving capacity of DE is greater than that of the GA.

In the initialization process, randomly generated vectors form a population, which then undergoes mutation: three vectors are randomly selected from the population and combined to synthesize a mutant vector. This mutant vector then undergoes recombination with the corresponding original vector from the population, forming a trial vector. Next, a selection process occurs in which the trial and original vectors are compared; the one with the higher fitness value is retained for the next iteration. Figure 23 presents a flowchart of DE.

The position of an individual is verified using (25). A total of N individuals are randomly generated in the search space, with each individual being represented by one D-dimensional vector.

$$X_{i} = \left\{ {\left( {x_{1} ,y_{1} ,z_{1} } \right),\left( {x_{2} ,y_{2} ,z_{2} } \right), \cdots \cdots ,\left( {x_{D} ,y_{D} ,z_{D} } \right)} \right\}$$
(25)

The population is updated using (26)–(28), in which \(F\) is the mutation scale factor; \(X_{{r_{1} }}^{k}\), \(X_{{r_{2} }}^{k}\), and \(X_{{r_{3} }}^{k}\) are randomly selected individuals; rand is a random number from the interval [0, 1]; \(CR\) is the crossover rate; \(randi\left( {\left[ {1,k} \right]} \right)\) is a random integer from 1 to k; and fitness represents the fitness of an individual (Liu et al. 2022).

$$V_{i}^{k} = X_{{r_{1} }}^{k} + F \times \left( {X_{{r_{2} }}^{k} - X_{{r_{3} }}^{k} } \right)$$
(26)
$$\begin{gathered} C_{i}^{k} \left( j \right) = \left\{ {\begin{array}{*{20}c} {V_{i}^{k} \left( j \right),\quad rand \le CR\;{\text{or}}\;j = randi\left( {\left[ {1,k} \right]} \right)} \\ {X_{i}^{k} \left( j \right),\quad rand > CR\;{\text{and}}\;j \ne randi\left( {\left[ {1,k} \right]} \right)} \\ \end{array} } \right. \hfill \\ j = \left( {1,2, \ldots ,k} \right) \hfill \\ \end{gathered}$$
(27)
$$X_{i}^{k + 1} = \left\{ {\begin{array}{*{20}c} {X_{i}^{k} ,\quad fitness\left( {X_{i}^{k} } \right) \le fitness\left( {C_{i}^{k} } \right)} \\ {C_{i}^{k} ,\quad fitness\left( {X_{i}^{k} } \right) > fitness\left( {C_{i}^{k} } \right)} \\ \end{array} } \right.$$
(28)
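The mutation, crossover, and selection steps of Eqs. (26)–(28) can be sketched as the classic DE/rand/1/bin scheme. This is an illustrative minimization sketch; the bounds, objective, and parameter defaults are assumptions not taken from the paper.

```python
import random

def de(objective, dim, pop_size=20, iters=100, F=0.8, CR=0.9,
       bounds=(-5.0, 5.0), seed=0):
    """Minimal DE sketch: mutate with a difference vector, crossover, greedy select."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [objective(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # Eq. (26): mutant from three distinct random individuals
            r1, r2, r3 = rng.sample([k for k in range(pop_size) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(dim)]
            # Eq. (27): binomial crossover; jrand forces at least one mutant gene
            jrand = rng.randrange(dim)
            trial = [min(hi, max(lo, v[j]))
                     if (rng.random() <= CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            # Eq. (28): greedy selection between trial and original vector
            c = objective(trial)
            if c <= cost[i]:
                pop[i], cost[i] = trial, c
    b = min(range(pop_size), key=lambda i: cost[i])
    return pop[b], cost[b]
```

The forced index `jrand` guarantees the trial vector differs from its parent in at least one dimension, so the population cannot stagnate purely through crossover.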
Fig. 23

Flowchart of DE

1.6 BA

The BA was proposed by Yang in 2010 (Wang et al. 2012). The algorithm was inspired by the echolocation that bats use to hunt prey, avoid obstacles, and locate suitable habitats. The BA mimics the hunting movements of a group of bats, which use echolocation to search multiple locations and identify specific targets. The initial search area is large but is gradually reduced to approach the target.

An objective function is first used to evaluate the quality of each bat's position. Each bat searches and moves in the process of identifying the optimal solution. As the bats use ultrasonic pulses to search for prey, their velocity and position are updated. They initially emit pulses with high loudness and low frequency to search a wide area and attempt to locate prey; the loudness of subsequent pulses is gradually reduced and the pulse emission rate is increased to determine the precise location of the prey. Figure 24 presents a flowchart of the BA.

The bats perceive distances through echolocation to search for prey and avoid obstacles (Liu et al. 2022). When hunting, they automatically adjust the wavelength and frequency of the emitted pulse and continuously adjust the pulse emission rate according to their proximity to the target. The frequency (\(f_{i}\)), velocity (\(V_{i}^{t}\)), and new solution (\(X_{i}^{t}\)) of the ith bat at time step t are calculated using (29)–(31), respectively. In these equations, \(\beta\) is a random number from the interval [0, 1], and \(X^{*}\) represents the optimal solution found up to the (t − 1)th time step; it is updated only after the \(X_{i}^{t}\) values of all bats have been confirmed. Because each bat differs in its ultrasound frequency, \(f_{i}\) represents a randomly assigned frequency from 0 to 100.

$$f_{i} = f_{min} + \left( {f_{max} - f_{min} } \right)\beta$$
(29)
$$V_{i}^{t} = V_{i}^{t - 1} + \left( {X_{i}^{t - 1} - X^{*} } \right)f_{i}$$
(30)
$$X_{i}^{t} = X_{i}^{t - 1} + V_{i}^{t}$$
(31)

In the local search, a random number (\(r_{1}\)) ranging from 0 to 1 is generated. If \(r_{1}\) is greater than the pulse emission rate \(r_{i}\), a new solution \(X_{new}\) replaces the original solution \(X_{i}^{t}\). This new solution is generated randomly around the current optimal solution as expressed in (32), in which \(\varepsilon\) is a random number ranging from −1 to 1 and \(A^{t}\) represents the average loudness of all the bats at time step t.

$$X_{new} = X^{*} + \varepsilon A^{t}$$
(32)

Another random number (\(r_{2}\)) that ranges from 0 to 1 is also generated. If \(r_{2}\) < \(A_{i}^{t}\) and the fitness of the new solution \(f\left( {X_{i}^{t} } \right)\) is smaller than that of the current optimal solution \(f\left( {X^{*} } \right)\), then \(A_{i}^{t + 1}\) and \(r_{i}^{t + 1}\) are updated using (33) and (34), respectively, in which \(\alpha\), \(\gamma\), and \(r^{0}\) are constants.

$$A_{i}^{t + 1} = \alpha A_{i}^{t}$$
(33)
$$r_{i}^{t + 1} = r^{0} \left[ {1 - exp\left( { - \gamma t} \right)} \right]$$
(34)
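Equations (29)–(34) can be sketched as follows. This is an illustrative minimization sketch, not the paper's implementation; the frequency range, loudness and pulse-rate constants, bounds, and objective are assumptions.

```python
import math
import random

def bat(objective, dim, n_bats=20, iters=200, fmin=0.0, fmax=2.0,
        alpha=0.9, gamma=0.9, bounds=(-5.0, 5.0), seed=0):
    """Minimal bat algorithm sketch: global moves via Eqs. (29)-(31),
    local random walks via Eq. (32), loudness/pulse-rate updates via (33)-(34)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    V = [[0.0] * dim for _ in range(n_bats)]
    A = [1.0] * n_bats          # loudness, decreases on acceptance, Eq. (33)
    r0 = 0.5
    r = [r0] * n_bats           # pulse emission rate, increases over time, Eq. (34)
    cost = [objective(x) for x in X]
    g = min(range(n_bats), key=lambda i: cost[i])
    best, best_cost = X[g][:], cost[g]
    for t in range(1, iters + 1):
        A_mean = sum(A) / n_bats
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()          # Eq. (29)
            for j in range(dim):
                V[i][j] += (X[i][j] - best[j]) * f           # Eq. (30)
            cand = [min(hi, max(lo, X[i][j] + V[i][j]))
                    for j in range(dim)]                     # Eq. (31)
            if rng.random() > r[i]:      # local walk around the best, Eq. (32)
                cand = [min(hi, max(lo, best[j] + rng.uniform(-1, 1) * A_mean))
                        for j in range(dim)]
            c = objective(cand)
            if rng.random() < A[i] and c < cost[i]:          # accept, then update A, r
                X[i], cost[i] = cand, c
                A[i] *= alpha                                # Eq. (33)
                r[i] = r0 * (1.0 - math.exp(-gamma * t))     # Eq. (34)
            if c < best_cost:
                best, best_cost = cand[:], c
    return best, best_cost
```

As loudness decays and the pulse rate rises, bats perform local walks less often but in an ever-smaller neighborhood of the best solution, mirroring the coarse-to-fine search described above.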
Fig. 24

Flowchart of the BA

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Kuo, PH., Syu, MJ., Yin, SY. et al. Intelligent optimization algorithms for control error compensation and task scheduling for a robotic arm. Int J Intell Robot Appl (2024). https://doi.org/10.1007/s41315-024-00328-z


Keywords: Navigation