Abstract
Reducing energy consumption is one of the key challenges in sensor networks. One technique for reducing energy consumption is dynamic power management. In this paper we model the power management problem in a sensor node as an average-reward Markov Decision Process and solve it using dynamic programming. We obtain an optimal policy that maximizes the long-term average of utility per unit of energy consumed. Simulation results show that our approach achieves the same utility as an always-on policy while consuming less energy.
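To illustrate the kind of computation involved, here is a minimal sketch of relative value iteration, a standard dynamic-programming method for average-reward MDPs. The two-state sleep/active model, transition table, and reward numbers below are purely illustrative assumptions, not the paper's model; in particular, the reward here is a simple utility-minus-energy-cost trade-off rather than the paper's ratio objective of utility per energy consumption.

```python
# Illustrative sketch: relative value iteration for a tiny average-reward MDP
# modeling a sensor node. States, actions, and all numbers are hypothetical.

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward,
# here a utility-minus-energy-cost trade-off (a simplification of the
# paper's utility-per-energy objective).
P = {
    "sleep":  {"stay": [("sleep", 1.0)],  "switch": [("active", 1.0)]},
    "active": {"stay": [("active", 1.0)], "switch": [("sleep", 1.0)]},
}
R = {
    "sleep":  {"stay": 0.0, "switch": -0.5},   # waking up costs energy
    "active": {"stay": 2.0, "switch": -0.1},   # sensing yields utility
}

def relative_value_iteration(P, R, ref="sleep", iters=1000, tol=1e-9):
    """Return (gain, policy): the optimal average reward and a greedy policy."""
    h = {s: 0.0 for s in P}                     # relative value function
    offset = 0.0
    for _ in range(iters):
        # One Bellman backup per state.
        h_new = {
            s: max(R[s][a] + sum(p * h[sp] for sp, p in P[s][a]) for a in P[s])
            for s in P
        }
        # Subtract the reference state's value to keep h bounded;
        # the offset converges to the optimal average reward (gain).
        offset = h_new[ref]
        h_new = {s: v - offset for s, v in h_new.items()}
        converged = max(abs(h_new[s] - h[s]) for s in P) < tol
        h = h_new
        if converged:
            break
    policy = {
        s: max(P[s], key=lambda a: R[s][a] + sum(p * h[sp] for sp, p in P[s][a]))
        for s in P
    }
    return offset, policy

gain, policy = relative_value_iteration(P, R)
# With these toy numbers the optimal policy wakes the node and keeps it
# active, giving an average reward of 2.0 per step.
```

In this toy instance the iteration converges after a few backups; in a realistic sensor model, the state would also encode the event-arrival process and the battery, as in the paper's formulation.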
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Kianpisheh, S., Charkari, N.M. (2009). Dynamic Power Management for Sensor Node in WSN Using Average Reward MDP. In: Liu, B., Bestavros, A., Du, DZ., Wang, J. (eds) Wireless Algorithms, Systems, and Applications. WASA 2009. Lecture Notes in Computer Science, vol 5682. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03417-6_6
DOI: https://doi.org/10.1007/978-3-642-03417-6_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-03416-9
Online ISBN: 978-3-642-03417-6
eBook Packages: Computer Science (R0)