
Dynamic Power Management for Sensor Node in WSN Using Average Reward MDP

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5682)

Abstract

Reducing energy consumption is one of the key challenges in sensor networks. One technique for reducing energy consumption is dynamic power management. In this paper we model the power management problem in a sensor node as an average reward Markov Decision Process and solve it using dynamic programming. We obtain an optimal policy that maximizes the long-term average utility per unit of energy consumed. Simulation results show that our approach achieves the same utility as an always-on policy while consuming less energy.
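The average-reward formulation described in the abstract is typically solved with relative value iteration, a dynamic-programming method for maximizing long-run average reward. The sketch below illustrates the technique on a toy power-management model; the three power states, the transition probabilities, and the reward numbers are illustrative assumptions, not the paper's actual model or parameters.

```python
import numpy as np

# Hypothetical sensor-node power states (0 = sleep, 1 = idle, 2 = active)
# and actions (0 = power down, 1 = stay, 2 = power up). All numbers below
# are made up for illustration only.

# Transition probabilities P[a, s, s'].
P = np.array([
    [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.9, 0.1]],  # power down
    [[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]],  # stay
    [[0.1, 0.9, 0.0], [0.0, 0.1, 0.9], [0.0, 0.0, 1.0]],  # power up
])

# Reward r[s, a]: an illustrative trade-off between task utility
# and the energy cost of the chosen action.
r = np.array([[0.0, 0.1, -0.2],
              [0.2, 0.5, -0.1],
              [0.3, 1.0,  0.4]])

def relative_value_iteration(P, r, ref=0, tol=1e-9, max_iter=10_000):
    """Relative value iteration for an average-reward (unichain) MDP.

    Returns the gain g (the long-run average reward) and a greedy policy.
    Subtracting the reference state's value each sweep keeps the relative
    values h bounded, which is what distinguishes this from discounted VI.
    """
    n_actions, n_states, _ = P.shape
    h = np.zeros(n_states)
    for _ in range(max_iter):
        # Q[s, a] = r[s, a] + sum_{s'} P[a, s, s'] * h[s']
        Q = r + np.einsum('asj,j->sa', P, h)
        h_new = Q.max(axis=1)
        g = h_new[ref]          # current estimate of the average reward
        h_new = h_new - g       # normalize against the reference state
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    return g, Q.argmax(axis=1)

g, policy = relative_value_iteration(P, r)
print("estimated gain (average reward):", round(g, 4))
print("greedy action per state:", policy)
```

Under this toy model the greedy policy tends to keep the node in higher-utility states only when the reward outweighs the energy cost; the paper's actual policy is derived from its own state and reward definitions.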



Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kianpisheh, S., Charkari, N.M. (2009). Dynamic Power Management for Sensor Node in WSN Using Average Reward MDP. In: Liu, B., Bestavros, A., Du, DZ., Wang, J. (eds) Wireless Algorithms, Systems, and Applications. WASA 2009. Lecture Notes in Computer Science, vol 5682. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03417-6_6

  • DOI: https://doi.org/10.1007/978-3-642-03417-6_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03416-9

  • Online ISBN: 978-3-642-03417-6

  • eBook Packages: Computer Science (R0)
