Wireless Personal Communications, Volume 108, Issue 4, pp 2241–2260

Classification of Sonar Targets Using an MLP Neural Network Trained by Dragonfly Algorithm

  • Mohammad Khishe
  • Abbas Safari
Article

Abstract

Owing to the compatibility of the designed classifiers with Multi-Layer Perceptron Neural Networks (MLP NNs), this article uses MLP NNs to identify and classify active and passive sonar targets. The great importance of precise, real-time classification of sonar targets on the one hand, and the tendency of classically trained MLP NNs to become trapped in local minima and to converge slowly on the other, motivate training MLP NNs with the recently proposed Dragonfly Algorithm (DA). To assess the performance of the designed classifier, the DA-based trainer is compared with the BBO, GWO, ALO, ACO, GSA and MVO algorithms in terms of classification accuracy, convergence speed and the ability to avoid local optima. For a comprehensive comparison, three sets of active and passive sonar data were used. Simulation results indicate that the DA-based classifier outperforms the benchmark algorithms on all three datasets.
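
To make the training scheme concrete, the sketch below shows one way the DA can act as an MLP trainer: each dragonfly's position encodes a flattened weight vector, the fitness is the network's mean squared error on the training data, and positions are updated through the DA's separation, alignment, cohesion, food-attraction and enemy-distraction terms. This is a minimal sketch under assumed settings (one hidden layer, annealed coefficient schedules, all agents treated as neighbours); the function names and hyper-parameters are illustrative, not the authors' implementation.

    import numpy as np

    def unpack(w, n_in, n_hidden):
        """Split a flat weight vector into the parameters of a one-hidden-layer MLP."""
        k = n_in * n_hidden
        W1 = w[:k].reshape(n_in, n_hidden)
        b1 = w[k:k + n_hidden]
        W2 = w[k + n_hidden:k + 2 * n_hidden].reshape(n_hidden, 1)
        b2 = w[-1]
        return W1, b1, W2, b2

    def fitness(w, X, y, n_in, n_hidden):
        """Mean squared error of the MLP encoded by w; the quantity DA minimizes."""
        W1, b1, W2, b2 = unpack(w, n_in, n_hidden)
        h = np.tanh(X @ W1 + b1)                       # hidden layer
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output in (0, 1)
        return float(np.mean((out.ravel() - y) ** 2))

    def train_da(X, y, n_hidden=10, n_agents=30, n_iter=200, seed=0):
        """Train the MLP with a simplified Dragonfly Algorithm.

        Every dragonfly position is one candidate weight vector; all agents are
        treated as neighbours (the original DA grows the neighbourhood radius).
        """
        rng = np.random.default_rng(seed)
        n_in = X.shape[1]
        dim = n_in * n_hidden + 2 * n_hidden + 1
        pos = rng.uniform(-1.0, 1.0, (n_agents, dim))  # positions = weight vectors
        step = np.zeros_like(pos)                      # step vectors (Delta X)
        fit = np.array([fitness(p, X, y, n_in, n_hidden) for p in pos])
        for t in range(n_iter):
            w_in = 0.9 - 0.5 * t / n_iter              # inertia, linearly decreased
            s = a = c = e = 0.1 * (1.0 - t / n_iter)   # swarm weights (assumed schedule)
            f = 2.0 * rng.random()                     # food-attraction weight
            food, enemy = pos[fit.argmin()], pos[fit.argmax()]
            for i in range(n_agents):
                S = (pos - pos[i]).sum(axis=0)         # separation
                A = step.mean(axis=0)                  # alignment (mean step of swarm)
                C = pos.mean(axis=0) - pos[i]          # cohesion
                F = food - pos[i]                      # attraction to best agent
                E = enemy + pos[i]                     # distraction from worst agent
                step[i] = np.clip(s * S + a * A + c * C + f * F + e * E + w_in * step[i], -1, 1)
                pos[i] = np.clip(pos[i] + step[i], -4, 4)
                fit[i] = fitness(pos[i], X, y, n_in, n_hidden)
        return pos[fit.argmin()], float(fit.min())

    # Usage on a sonar-style dataset (e.g. 60 spectral features, binary labels):
    # w_best, train_mse = train_da(X_train, y_train, n_hidden=10)

The key design choice is treating the entire flattened weight vector as a single search-space point, so the DA needs no gradient information and is therefore less prone to the local-minimum stagnation that affects backpropagation.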

Keywords

Sonar Classification · Dragonfly · Multi-layer perceptron neural network

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. Department of Electrical Engineering, Imam Khomeini Marine University, Nowshahr, Iran
