
Optinformatics Across Heterogeneous Problem Domains and Solvers

  • Chapter
Optinformatics in Evolutionary Learning and Optimization

Part of the book series: Adaptation, Learning, and Optimization (ALO, volume 25)

Abstract

Besides the illustration of optinformatic algorithm design in evolutionary learning and optimization within a single problem domain, this chapter further introduces specific algorithm developments of optinformatics across heterogeneous problems or solvers. In particular, based on the paradigm of evolutionary search plus transfer learning in Sect. 3.2, the first optinformatic algorithm considers knowledge transfer across problem domains for enhanced vehicle or arc routing performance. It intends to transfer knowledge from vehicle routing to speed up the optimization of arc routing, and vice versa. The vehicle and arc routing problems used in Sect. 3.2 are again considered in this work to evaluate the performance of the optinformatic algorithm. Next, the second optinformatic method introduces evolutionary knowledge learning and transfer across reinforcement learning agents in a multi-agent system. Two types of neural networks, i.e., feedforward and adaptive resonance theory (ART) networks, are employed as the reinforcement learning agents, and the mine navigation task and the game Unreal Tournament 2004 are used as the learning tasks to investigate the performance of the optinformatic method.
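
To make the first idea concrete, the sketch below is a hypothetical illustration (not the chapter's actual algorithm) of evolutionary search seeded with transferred knowledge: solutions evolved in a source routing domain (e.g., vehicle routing) are mapped into the target-domain representation (e.g., arc routing) and injected into the initial population of the target solver. All function names and parameters here are assumptions made for illustration.

```python
def transfer_seeded_ea(source_solutions, map_to_target, random_solution,
                       mutate, evaluate, pop_size=50, generations=100,
                       inject_ratio=0.2):
    """Hypothetical sketch: seed a target-domain EA with solutions
    transferred from a previously solved source domain."""
    # Map source-domain solutions (e.g., vehicle routes) into the
    # target-domain representation (e.g., arc-routing service sequences).
    injected = [map_to_target(s) for s in source_solutions]
    n_inject = min(len(injected), int(inject_ratio * pop_size))

    # Initial population: a fraction transferred, the rest random.
    population = injected[:n_inject]
    population += [random_solution() for _ in range(pop_size - len(population))]

    for _ in range(generations):
        # Simple (mu + lambda) loop: mutate, pool parents and offspring,
        # keep the pop_size best individuals (minimisation).
        offspring = [mutate(p) for p in population]
        population = sorted(population + offspring, key=evaluate)[:pop_size]

    return min(population, key=evaluate)
```

A target-domain solver would supply its own mapping, variation, and evaluation routines; with an empty source_solutions list the sketch reduces to a plain randomly initialised evolutionary algorithm.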

Notes

  1. Note that this is in contrast to domain-specific human-crafted EAs, in which domain knowledge is only captured and incorporated once as part of the algorithm design and development process.

  2. A memetic automaton is defined as a self-contained software agent that autonomously learns to enhance its adaptivity and capability, using memes as the building blocks of information during knowledge evolution.

  3. A meme is regarded as the fundamental building unit of cultural information, held in an individual’s mind, which is capable of being transmitted to others [15].

  4. A meme is defined in [15] as the basic unit of cultural information stored in an individual’s mind.
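
As a rough, hypothetical illustration of Notes 2 to 4 (not code from the chapter), a memetic automaton can be pictured as an agent that maintains a pool of memes, here simplified to condition-action rules with a score, and can transmit its best memes to another agent; the class and method names below are illustrative assumptions only.

```python
class MemeticAutomaton:
    """Illustrative sketch of a memetic automaton: an agent that keeps a
    pool of memes (here, condition -> action rules with a score) and can
    transmit them to other agents."""

    def __init__(self, name):
        self.name = name
        self.memes = {}  # condition -> (action, score)

    def learn(self, condition, action, reward):
        # Keep the best-scoring rule observed for each condition.
        _, best_score = self.memes.get(condition, (None, float("-inf")))
        if reward > best_score:
            self.memes[condition] = (action, reward)

    def transmit(self, other, top_k=5):
        # Pass this agent's highest-scoring memes to another agent,
        # mimicking imitation-based knowledge transfer between agents.
        ranked = sorted(self.memes.items(), key=lambda kv: kv[1][1], reverse=True)
        for condition, (action, score) in ranked[:top_k]:
            other.learn(condition, action, score)
```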

References

  1. Y.C. Jin, Knowledge Incorporation in Evolutionary Computation. Studies in Fuzziness and Soft Computing (Springer, 2010)

  2. Y.S. Ong, N. Krasnogor, H. Ishibuchi, Special issue on memetic algorithm. IEEE Trans. Syst. Man Cybern.—Part B 37(1), 2–5 (2007)

  3. Y.S. Ong, M.H. Lim, F. Neri, H. Ishibuchi, Special issue on emerging trends in soft computing: memetic algorithms. Soft Comput.-A Fusion Found. Methodol. Appl. 13(8–9), 1–2 (2009)

  4. M.H. Lim, S. Gustafson, N. Krasnogor, Y.S. Ong, Editorial to the first issue, memetic computing. Soft Comput.-A Fusion Found. Methodol. Appl. 1(1), 1–2 (2009)

  5. J.E. Smith, Co-evolving memetic algorithms: a review and progress report. IEEE Trans. Syst. Man Cybern.—Part B 37(1), 6–17 (2007)

  6. I. Paenke, Y. Jin, J. Branke, Balancing population- and individual-level adaptation in changing environments. Adapt. Behav. 17(2), 153–174 (2009)

  7. G. Gutin, D. Karapetyan, A selection of useful theoretical tools for the design and analysis of optimization heuristics. Memet. Comput. 1(1), 25–34 (2009)

  8. J. Tang, M.H. Lim, Y.S. Ong, Diversity-adaptive parallel memetic algorithm for solving large scale combinatorial optimization problems. Soft Comput. J. 11(9), 873–888 (2007)

  9. Z. Zhu, Y.S. Ong, M. Dash, Wrapper-filter feature selection algorithm using a memetic framework. IEEE Trans. Syst. Man Cybern.—Part B 37(1), 70–76 (2007)

  10. S. Hasan, R. Sarker, D. Essam, D. Cornforth, Memetic algorithms for solving job-shop scheduling problems. Memet. Comput. 1(1), 69–83 (2009)

  11. M. Tang, X. Yao, A memetic algorithm for VLSI floor planning. IEEE Trans. Syst. Man Cybern.—Part B 37(1), 62–69 (2007)

  12. P. Cunningham, B. Smyth, Case-based reasoning in scheduling: reusing solution components. Int. J. Prod. Res. 35(4), 2947–2961 (1997)

  13. S.J. Louis, J. McDonnell, Learning with case-injected genetic algorithms. IEEE Trans. Evol. Comput. 8(4), 316–328 (2004)

  14. L. Feng, Y.S. Ong, I.W. Tsang, A.H. Tan, An evolutionary search paradigm that learns with past experiences, in IEEE World Congress on Computational Intelligence, Congress on Evolutionary Computation (2012)

  15. R. Dawkins, The Selfish Gene (Oxford University Press, Oxford, 1976)

  16. X.S. Chen, Y.S. Ong, Q.H. Nguyen, A conceptual modeling of meme complexes in stochastic search. IEEE Trans. Syst. Man Cybern. Part C: Appl. Rev. 42(5), 612–625 (2011)

  17. Y. Mei, K. Tang, X. Yao, Improved memetic algorithm for capacitated arc routing problem, in IEEE Congress on Evolutionary Computation, pp. 1699–1706 (2009)

  18. E.W. Dijkstra, A note on two problems in connection with graphs. Numerische Mathematik 1, 269–271 (1959)

  19. I. Borg, P.J.F. Groenen, Modern Multidimensional Scaling: Theory and Applications (Springer, 2005)

  20. C. Wang, S. Mahadevan, Manifold alignment without correspondence, in Proceedings of the 21st International Joint Conference on Artificial Intelligence, pp. 1273–1278 (2009)

  21. F. Neri, C. Cotta, P. Moscato, Handbook of Memetic Algorithms. Studies in Computational Intelligence (Springer, 2011)

  22. F. Neri, E. Mininno, Memetic compact differential evolution for Cartesian robot control. IEEE Comput. Intell. Mag. 5(2), 54–65 (2010)

  23. C.K. Ting, C.C. Liao, A memetic algorithm for extending wireless sensor network lifetime. Inf. Sci. 180(24), 4818–4833 (2010)

  24. Y.S. Ong, M.H. Lim, X.S. Chen, Memetic computation—past, present & future [research frontier]. IEEE Comput. Intell. Mag. 5(2), 24–36 (2010)

  25. X. Chen, Y. Ong, M. Lim, K.C. Tan, A multi-facet survey on memetic computation. IEEE Trans. Evol. Comput. 15(5), 591–607 (2011)

  26. S.J. Pan, Q. Yang, A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2010)

  27. A. Gretton, O. Bousquet, A. Smola, B. Schölkopf, Measuring statistical dependence with Hilbert-Schmidt norms, in Proceedings of Algorithmic Learning Theory, pp. 63–77 (2005)

  28. S.C.H. Hoi, J. Zhuang, I. Tsang, A family of simple non-parametric kernel learning algorithms. J. Mach. Learn. Res. (JMLR) 12, 1313–1347 (2011)

  29. K.M. Borgwardt, A. Gretton, M.J. Rasch, H.P. Kriegel, B. Schölkopf, A.J. Smola, Integrating structured biological data by kernel maximum mean discrepancy. Int. Conf. Intell. Syst. Mol. Biol., 49–57 (2006)

  30. L. Song, A. Smola, A. Gretton, K.M. Borgwardt, A dependence maximization view of clustering, in Proceedings of the 24th International Conference on Machine Learning, pp. 815–822 (2007)

  31. B.E. Gillett, L.R. Miller, A heuristic algorithm for the vehicle-dispatch problem. Oper. Res. 22(2), 340–349 (1974)

  32. G. Clarke, J. Wright, Scheduling of vehicles from a central depot to a number of delivery points. Oper. Res. 12(4), 568–581 (1964)

  33. B. Golden, R. Wong, Capacitated arc routing problems. Networks 11(3), 305–315 (1981)

  34. B.L. Golden, J.S. DeArmon, E.K. Baker, Computational experiments with algorithms for a class of routing problems. Comput. Oper. Res. 10(1), 47–59 (1983)

  35. G. Ulusoy, The fleet size and mix problem for capacitated arc routing. Eur. J. Oper. Res. 22(3), 329–337 (1985)

  36. T. Bäck, U. Hammel, H.-P. Schwefel, Evolutionary computation: comments on the history and current state. IEEE Trans. Evol. Comput. 1(1), 3–17 (1997)

  37. J.W. Eerkens, C.P. Lipo, Cultural transmission, copying errors, and the generation of variation in material culture and the archaeological record. J. Anthr. Archaeol. 24(4), 316–334 (2005)

  38. M.A. Runco, S. Pritzker, Encyclopedia of Creativity (Academic Press, 1999)

  39. T.L. Huston, G. Levinger, Interpersonal attraction and relationships. Ann. Rev. Psychol. 29(1), 115–156 (1978)

  40. F. Bousquet, C. Le Page, Multi-agent simulations and ecosystem management: a review. Ecol. Model. 176(3), 313–332 (2004)

  41. P. Stone, M. Veloso, Multiagent systems: a survey from a machine learning perspective. Auton. Robot. 8(3), 345–383 (2000)

  42. B. Burmeister, A. Haddadi, G. Matylis, Application of multi-agent systems in traffic and transportation, in IEE Proceedings-Software Engineering, vol. 144 (IET, 1997), pp. 51–60

  43. D.L. Hancock, G.B. Lamont, Multi agent systems on military networks, in 2011 IEEE Symposium on Computational Intelligence in Cyber Security (CICS) (IEEE, 2011), pp. 100–107

  44. M. Pipattanasomporn, H. Feroze, S. Rahman, Multi-agent systems in a distributed smart grid: design and implementation, in Power Systems Conference and Exposition, 2009. PSCE'09. IEEE/PES (IEEE, 2009), pp. 1–8

  45. R.S. Sutton, Learning to predict by the methods of temporal differences. Mach. Learn. 3(1), 9–44 (1988)

  46. C.J. Watkins, P. Dayan, Q-learning. Mach. Learn. 8(3–4), 279–292 (1992)

  47. G.A. Rummery, M. Niranjan, On-line Q-learning using connectionist systems. Department of Engineering (University of Cambridge, 1994)

  48. M.P. Deisenroth, G. Neumann, J. Peters et al., A survey on policy search for robotics. Found. Trends Robot. 2(1–2), 1–142 (2013)

  49. S.P. Singh, R.S. Sutton, Reinforcement learning with replacing eligibility traces. Mach. Learn. 22(1–3), 123–158 (1996)

  50. A. Lazaric, M. Restelli, A. Bonarini, Reinforcement learning in continuous action spaces through sequential Monte Carlo methods, in Advances in Neural Information Processing Systems, pp. 833–840 (2007)

  51. K. Ueda, I. Hatono, N. Fujii, J. Vaario, Reinforcement learning approaches to biological manufacturing systems. CIRP Ann.-Manuf. Technol. 49(1), 343–346 (2000)

  52. I. Giannoccaro, P. Pontrandolfo, Inventory management in supply chains: a reinforcement learning approach. Int. J. Prod. Econ. 78(2), 153–161 (2002)

  53. A.E. Gaweda, M.K. Muezzinoglu, A.A. Jacobs, G.R. Aronoff, M.E. Brier, Model predictive control with reinforcement learning for drug delivery in renal anemia management, in Engineering in Medicine and Biology Society, 2006. EMBS'06. 28th Annual International Conference of the IEEE (IEEE, 2006), pp. 5177–5180

  54. T.G. Dietterich, Hierarchical reinforcement learning with the MAXQ value function decomposition. J. Artif. Intell. Res. (JAIR) 13, 227–303 (2000)

  55. R.S. Sutton, D. Precup, S. Singh, Between MDPs and semi-MDPs: a framework for temporal abstraction in reinforcement learning. Artif. Intell. 112(1–2), 181–211 (1999)

  56. M.E. Taylor, P. Stone, Transfer learning for reinforcement learning domains: a survey. J. Mach. Learn. Res. 10, 1633–1685 (2009)

  57. R.S. Woodworth, E.L. Thorndike, The influence of improvement in one mental function upon the efficiency of other functions (I). Psychol. Rev. 8(3), 247 (1901)

  58. B.F. Skinner, Science and Human Behavior (Simon and Schuster, 1953)

  59. G.P.C. Fung, J.X. Yu, H. Lu, P.S. Yu, Text classification without negative examples revisit. IEEE Trans. Knowl. Data Eng. 18(1), 6–20 (2006)

  60. B. Bakker, T. Heskes, Task clustering and gating for Bayesian multitask learning. J. Mach. Learn. Res. 4, 83–99 (2003)

  61. C.H. Lampert, H. Nickisch, S. Harmeling, Learning to detect unseen object classes by between-class attribute transfer, in IEEE Conference on Computer Vision and Pattern Recognition, 2009. CVPR 2009 (IEEE, 2009), pp. 951–958

  62. D. Wang, T.F. Zheng, Transfer learning for speech and language processing, in 2015 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA) (IEEE, 2015), pp. 1225–1237

  63. M.E. Taylor, N.K. Jong, P. Stone, Transferring instances for model-based reinforcement learning, in Machine Learning and Knowledge Discovery in Databases (Springer, 2008), pp. 488–505

  64. M.E. Taylor, P. Stone, Representation transfer for reinforcement learning, in AAAI 2007 Fall Symposium on Computational Approaches to Representation Change during Learning and Development, pp. 1–8 (2007)

  65. B. Banerjee, P. Stone, General game learning using knowledge transfer, in IJCAI, pp. 672–677 (2007)

  66. T.J. Walsh, L. Li, M.L. Littman, Transferring state abstractions between MDPs, in ICML Workshop on Structural Knowledge Transfer for Machine Learning (2006)

  67. M.E. Taylor, P. Stone, Cross-domain transfer for reinforcement learning, in Proceedings of the 24th International Conference on Machine Learning (ACM, 2007), pp. 879–886

  68. A. Taylor, I. Dusparic, E. Galván-López, S. Clarke, V. Cahill, Transfer learning in multi-agent systems through parallel transfer, in Theoretically Grounded Transfer Learning at the 30th International Conference on Machine Learning (ICML) (Omnipress, 2013)

  69. P. Vrancx, Y.-M. De Hauwere, A. Nowé, Transfer learning for multi-agent coordination, in ICAART (2), pp. 263–272 (2011)

  70. M.E. Taylor, S. Whiteson, P. Stone, Transfer via inter-task mappings in policy search reinforcement learning, in Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (ACM, 2007), p. 37

  71. G. Boutsioukis, I. Partalas, I. Vlahavas, Transfer learning in multi-agent reinforcement learning domains, in Recent Advances in Reinforcement Learning (Springer, 2012), pp. 249–260

  72. E. Oliveira, L. Nunes, Learning by exchanging advice, in Design of Intelligent Multi-agent Systems, Chapter 9, ed. by R. Khosla, N. Ichalkaranje, L. Jain (Springer, New York, NY, USA, 2005)

  73. L. Feng, Y.S. Ong, A.H. Tan, X. Chen, Towards human-like social multi-agents with memetic automaton, in 2011 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2011), pp. 1092–1099

  74. A.-H. Tan, N. Lu, D. Xiao, Integrating temporal difference methods and self-organizing neural networks for reinforcement learning with delayed evaluative feedback. IEEE Trans. Neural Netw. 19(2), 230–244 (2008)

  75. H. Meisner, Interview with Richard S. Sutton. Künstliche Intelligenz 3(1), 41–43 (2009)

  76. X. Chen, Y. Zeng, Y.S. Ong, C.S. Ho, Y. Xiang, A study on like-attracts-like versus elitist selection criterion for human-like social behavior of memetic multiagent systems, in 2013 IEEE Congress on Evolutionary Computation (CEC) (IEEE, 2013), pp. 1635–1642

  77. D. Gordon, D. Subramanian, A cognitive model of learning to navigate, in Proceedings of the 19th Conference of the Cognitive Science Society, vol. 25, p. 271 (1997)

  78. D. Wang, A. Tan, Creating autonomous adaptive agents in a real-time first-person shooter computer game. IEEE Trans. Comput. Intell. AI Games (2014)

  79. J. Gemrot, R. Kadlec, M. Bída, O. Burkert, R. Píbil, J. Havlíček, L. Zemčák, J. Šimlovič, R. Vansa, M. Štolba et al., Pogamut 3 can assist developers in building AI (not only) for their videogame agents, in Agents for Games and Simulations (Springer, 2009), pp. 1–15

  80. R. Adobbati, A.N. Marshall, A. Scholer, S. Tejada, G.A. Kaminka, S. Schaffer, C. Sollitto, Gamebots: a 3D virtual world test-bed for multi-agent research, in Proceedings of the Second International Workshop on Infrastructure for Agents, MAS, and Scalable MAS, Montreal, Canada, vol. 5 (2001)

  81. J. Lindfors, M. Fleury, JMX: Managing J2EE with Java Management Extensions (Sams Publishing, 2002)

Author information

Correspondence to Liang Feng.

Copyright information

© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Feng, L., Hou, Y., Zhu, Z. (2021). Optinformatics Across Heterogeneous Problem Domains and Solvers. In: Optinformatics in Evolutionary Learning and Optimization. Adaptation, Learning, and Optimization, vol 25. Springer, Cham. https://doi.org/10.1007/978-3-030-70920-4_4
