Expertise Based Cooperative Reinforcement Learning Methods (ECRLM) for Dynamic Decision Making in Retail Shop Application

  • Conference paper
  • In: Information and Communication Technology for Intelligent Systems (ICTIS 2017) - Volume 2

Abstract

This paper proposes a novel approach to dynamic decision making in a retail application using expertise-based cooperative reinforcement learning methods (ECRLM). Three cooperation schemes for cooperative reinforcement learning are proposed: the EGroup scheme, the EDynamic scheme, and the EGoal-oriented scheme. Implementation results demonstrate that the proposed cooperation schemes speed up the rate at which the group of agents reaches good action policies. The approach is developed for three retailer shops in a retail market. Retailers can help each other and profit from shared cooperation knowledge while learning their own strategies, which reflect their individual aims and benefits. The retailers are the learning agents in this setting and employ reinforcement learning to learn cooperatively from the environment. Under reasonable assumptions about each dealer's stocking policy, replenishment period, and customer arrival process, the problem is modeled as a Markov decision process, which makes it possible to apply learning algorithms.
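To make the setting concrete, the sketch below shows a minimal, hypothetical form of expertise-weighted cooperative Q-learning for three retailer agents. The discretised inventory states, action set, reward function, toy demand dynamics, and the Q-table blending rule are all simplifying assumptions made here for illustration; they are not the paper's exact ECRLM formulation.

```python
# Minimal illustrative sketch (not the paper's ECRLM algorithm): three retailer
# agents run tabular Q-learning on a toy inventory MDP and periodically blend
# their Q-tables toward the most "expert" agent (highest accumulated reward).
import numpy as np

N_AGENTS = 3          # three retailer shops
N_STATES = 10         # assumed: discretised inventory levels
N_ACTIONS = 4         # assumed: discrete price/reorder choices
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

rng = np.random.default_rng(0)
Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))
expertise = np.zeros(N_AGENTS)     # running reward used as an expertise score

def step(state, action):
    """Toy environment: random customer demand drives the next state and reward."""
    demand = rng.integers(0, 3)
    next_state = (state + action - demand) % N_STATES
    reward = float(min(state, demand) - 0.1 * action)   # sales minus ordering cost
    return next_state, reward

def choose(agent, state):
    """Epsilon-greedy action selection."""
    if rng.random() < EPS:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(Q[agent, state]))

states = rng.integers(N_STATES, size=N_AGENTS)
for episode in range(500):
    for a in range(N_AGENTS):
        s = states[a]
        act = choose(a, s)
        s2, r = step(s, act)
        # standard Q-learning update for each retailer agent
        Q[a, s, act] += ALPHA * (r + GAMMA * Q[a, s2].max() - Q[a, s, act])
        expertise[a] += r
        states[a] = s2
    # cooperation step (assumed form): every 50 episodes, agents blend their
    # Q-tables toward the estimates of the most expert agent
    if episode % 50 == 49:
        best = int(np.argmax(expertise))
        for a in range(N_AGENTS):
            if a != best:
                Q[a] = 0.8 * Q[a] + 0.2 * Q[best]

print("learned greedy actions per state:", np.argmax(Q, axis=2))
```

The blending weight and sharing interval above are arbitrary; the EGroup, EDynamic, and EGoal-oriented schemes described in the paper differ in when and with whom such learned experience is shared, but the overall learn-then-cooperate loop has this general shape.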

Author information

Corresponding author

Correspondence to Deepak A. Vidhate.

Copyright information

© 2018 Springer International Publishing AG

About this paper

Cite this paper

Vidhate, D.A., Kulkarni, P. (2018). Expertise Based Cooperative Reinforcement Learning Methods (ECRLM) for Dynamic Decision Making in Retail Shop Application. In: Satapathy, S., Joshi, A. (eds) Information and Communication Technology for Intelligent Systems (ICTIS 2017) - Volume 2. ICTIS 2017. Smart Innovation, Systems and Technologies, vol 84. Springer, Cham. https://doi.org/10.1007/978-3-319-63645-0_39

  • DOI: https://doi.org/10.1007/978-3-319-63645-0_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-63644-3

  • Online ISBN: 978-3-319-63645-0

  • eBook Packages: Engineering, Engineering (R0)
