
Cluster Computing, Volume 20, Issue 3, pp 1881–1897

Parallel high-dimensional multi-objective feature selection for EEG classification with dynamic workload balancing on CPU–GPU architectures

  • Juan José Escobar
  • Julio Ortega
  • Jesús González
  • Miguel Damas
  • Antonio F. Díaz

Abstract

Many bioinformatics applications that analyse large volumes of high-dimensional data pose complex problems requiring metaheuristic approaches with different types of implicit parallelism. For example, while functional parallelism can be used to accelerate an evolutionary algorithm, the fitness evaluation of its population may involve cost functions that themselves exhibit data parallelism. Heterogeneous parallel architectures, which combine central processing units (CPUs) with multiple superscalar cores and accelerators such as graphics processing units (GPUs), are therefore well suited to these problems. This paper takes advantage of such CPU–GPU heterogeneous architectures to accelerate electroencephalogram (EEG) classification and feature selection by evolutionary multi-objective optimization, in the context of brain-computer interface tasks. We have used the OpenCL framework to develop parallel master-worker codes that implement an evolutionary multi-objective feature selection procedure in which the individuals of the population are dynamically distributed among the available CPU and GPU cores.
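As a rough illustration of the dynamic distribution described above, the following minimal C++ sketch (our own, not the authors' code) shows a master-worker scheme in which heterogeneous workers claim individuals from a shared atomic counter, so that faster devices naturally evaluate more of the population. The worker count, population size, and the evaluate_individual() placeholder are assumptions made only for this example; in the actual procedure each worker would enqueue an OpenCL fitness kernel on its CPU or GPU device.

```cpp
// Hypothetical sketch: dynamic master-worker scheduling of population individuals
// among heterogeneous workers. Each worker thread stands in for a CPU or GPU
// OpenCL command queue; evaluate_individual() is a placeholder for the real
// data-parallel fitness evaluation.
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

static double evaluate_individual(int id) {
    // Placeholder cost; the real fitness evaluation runs as an OpenCL kernel.
    return static_cast<double>(id) * 0.5;
}

int main() {
    const int population_size = 120;   // assumed size, for illustration only
    const int num_workers = 3;         // e.g. two CPU sub-devices plus one GPU
    std::vector<double> fitness(population_size);
    std::atomic<int> next{0};          // master: index of next unevaluated individual

    std::vector<std::thread> workers;
    for (int w = 0; w < num_workers; ++w) {
        workers.emplace_back([&] {
            // Dynamic balancing: each worker keeps claiming individuals until none remain,
            // so faster devices simply process more of them.
            for (int i = next.fetch_add(1); i < population_size; i = next.fetch_add(1)) {
                fitness[i] = evaluate_individual(i);
            }
        });
    }
    for (auto& t : workers) t.join();

    std::printf("Evaluated %d individuals on %d workers\n", population_size, num_workers);
    return 0;
}
```

Because individuals are claimed one at a time, no static partition of the population is required and load imbalance between CPU and GPU devices is absorbed automatically.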

Keywords

Dynamic task scheduling · Multi-objective EEG classification · Feature selection · GPU · Heterogeneous parallel architectures · Memory access optimization

Notes

Acknowledgements

This work was funded by Project TIN2015-67020-P (Spanish “Ministerio de Economía y Competitividad” and ERDF funds). We would like to thank the BCI laboratory of the University of Essex, especially Prof. John Q. Gan, for allowing us to use their databases, and the anonymous reviewers for their useful comments and suggestions.


Copyright information

© Springer Science+Business Media, LLC 2017

Authors and Affiliations

  • Juan José Escobar (1)
  • Julio Ortega (1)
  • Jesús González (1)
  • Miguel Damas (1)
  • Antonio F. Díaz (1)

  1. Department of Computer Architecture and Technology, CITIC, University of Granada, Granada, Spain
