Decentralized Control of DR Using a Multi-agent Method
Demand response (DR) is one of the most cost-effective means of reducing energy costs for residential and small industrial buildings. Today, with the spread of the smart grid, electricity markets, and especially smart homes, DR can reduce costs and even generate profits for consumers. On the other hand, centralized control with bidirectional communication between DR aggregators and consumers raises problems such as poor scalability and privacy violations. In this chapter, we propose a multi-agent method based on a Q-learning algorithm for decentralized control of DR. Q-learning is a model-free reinforcement learning technique and a simple way for agents to learn how to act optimally in controlled Markovian domains. With this method, each consumer adapts its bidding and buying strategy over time according to the market outcomes. We also consider local energy supply for consumers, such as small-scale renewable energy generators. Comparison with a centralized aggregator-based approach shows the effectiveness of the proposed decentralized DR market.
Keywords: Demand response · Multi-agents · Q-learning algorithm
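To illustrate the learning mechanism the abstract describes, the sketch below shows a minimal tabular Q-learning agent that adapts its bid price from market outcomes. The single-state formulation, price grid, clearing rule, and reward model are simplifying assumptions for illustration, not the chapter's actual market design.

```python
import random

# Minimal tabular Q-learning sketch (illustrative assumptions throughout):
# a consumer agent learns which bid price to submit in a simplified DR market.

class BiddingAgent:
    def __init__(self, prices, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.prices = prices                  # discrete bid-price actions
        self.q = {p: 0.0 for p in prices}     # single-state Q-table
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose_bid(self):
        # epsilon-greedy exploration over bid prices
        if random.random() < self.epsilon:
            return random.choice(self.prices)
        return max(self.q, key=self.q.get)

    def update(self, bid, reward):
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q.values())
        self.q[bid] += self.alpha * (reward + self.gamma * best_next - self.q[bid])

def market_reward(bid, clearing_price=5.0):
    # Hypothetical market outcome: a bid at or above the clearing price is
    # accepted; the reward is the consumer's surplus (utility minus price).
    utility = 8.0
    return utility - clearing_price if bid >= clearing_price else 0.0

random.seed(0)
agent = BiddingAgent(prices=[3.0, 4.0, 5.0, 6.0, 7.0])
for _ in range(500):
    bid = agent.choose_bid()
    agent.update(bid, market_reward(bid))

# After training, the agent's preferred bid clears the hypothetical market.
print(max(agent.q, key=agent.q.get))
```

In the decentralized setting the chapter proposes, each consumer would run such an agent independently, so no aggregator needs detailed consumption data, which is what sidesteps the scalability and privacy issues of centralized control.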
The work of Saber Talari, Miadreza Shafie-khah and João P.S. Catalão was supported by FEDER funds through COMPETE 2020 and by Portuguese funds through FCT, under Projects SAICT-PAC/0004/2015—POCI-01-0145-FEDER-016434, POCI-01-0145-FEDER-006961, UID/EEA/50014/2013, UID/CEC/50021/2013, and UID/EMS/00151/2013. Also, the research leading to these results has received funding from the EU Seventh Framework Programme FP7/2007-2013 under grant agreement no. 309048.
Amin Shokri Gazafroudi and Juan Manuel Corchado acknowledge the support of the European Commission H2020 MSCA-RISE-2014: Marie Sklodowska-Curie project DREAM-GO Enabling Demand Response for short and real-time Efficient And Market Based Smart Grid Operation—An intelligent and real-time simulation approach ref. 641794. Moreover, Amin Shokri Gazafroudi acknowledges the support of the Ministry of Education of the Junta de Castilla y León and the European Social Fund through a predoctoral research grant associated with the research project "Arquitectura multiagente para la gestión eficaz de redes de energía a través del uso de técnicas de intelligencia artificial" of the University of Salamanca.