
Potential Research Directions

Chapter in *Optinformatics in Evolutionary Learning and Optimization*

Part of the book series: Adaptation, Learning, and Optimization (ALO, volume 25)

Abstract

Although optinformatics in evolutionary learning and optimization has made remarkable progress in recent years, a number of potential research directions that we believe would benefit the field of evolutionary computation remain to be explored. This chapter discusses these possible research directions of optinformatics in evolutionary learning and optimization.



Author information

Corresponding author: Liang Feng


Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Feng, L., Hou, Y., Zhu, Z. (2021). Potential Research Directions. In: Optinformatics in Evolutionary Learning and Optimization. Adaptation, Learning, and Optimization, vol 25. Springer, Cham. https://doi.org/10.1007/978-3-030-70920-4_5

