Learning Agents in an Artificial Power Exchange: Tacit Collusion, Market Power and Efficiency of Two Double-auction Mechanisms
This paper investigates the relative efficiency of two double-auction mechanisms for power exchanges using agent-based modeling. Two standard pricing rules, "discriminatory" (pay-as-bid) and "uniform," are compared in computational experiments that, across different inelastic demand levels, explore oligopolistic competition on both quantity and price between learning sellers/producers. Two reinforcement learning algorithms, Marimon and McGrattan's and Q-learning, are used to simulate different behavioral types: greedy sellers, who optimize their instantaneous rewards on a tick-by-tick basis, and inter-temporally optimizing sellers. Results are interpreted relative to game-theoretic solutions and performance metrics: Nash equilibria in pure strategies and the sellers' joint profit maximum are used to analyze the convergence behavior of the learning algorithms. Furthermore, the difference between payments to suppliers and total generation costs is estimated to measure the degree of market inefficiency. The results show that collusive behavior is penalized by the discriminatory auction mechanism in low-demand scenarios, whereas in a high-demand scenario the difference between the two mechanisms appears to be negligible.
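The distinction between the two pricing rules can be illustrated with a minimal clearing sketch. This is not the paper's implementation; it assumes single-unit price-quantity asks and a fixed inelastic demand, and all function and variable names are illustrative.

```python
def clear_auction(offers, demand, rule):
    """offers: list of (ask price, capacity); demand: inelastic quantity.

    Returns the total payment to dispatched sellers under the given rule:
    - "uniform": every dispatched seller is paid the clearing price
      (the highest accepted ask).
    - "discriminatory": each dispatched seller is paid its own ask
      (pay-as-bid).
    """
    accepted = []
    remaining = demand
    for price, qty in sorted(offers):          # merit order: cheapest first
        if remaining <= 0:
            break
        dispatched = min(qty, remaining)
        accepted.append((price, dispatched))
        remaining -= dispatched
    if rule == "uniform":
        clearing_price = accepted[-1][0]       # marginal (last accepted) ask
        return sum(q * clearing_price for _, q in accepted)
    elif rule == "discriminatory":
        return sum(p * q for p, q in accepted)
    raise ValueError(rule)

offers = [(10.0, 50), (20.0, 50), (40.0, 50)]  # (ask price, capacity)
uniform_pay = clear_auction(offers, demand=80, rule="uniform")        # 80 * 20 = 1600
payasbid_pay = clear_auction(offers, demand=80, rule="discriminatory")  # 50*10 + 30*20 = 1100
```

The gap between the payment to suppliers and the cost of the dispatched generation is exactly the kind of inefficiency measure the paper estimates for the two rules.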
Keywords: Agent-based simulation · Power exchange · Market power · Reinforcement learning
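The contrast between greedy and inter-temporally optimizing sellers maps onto the discount factor of the textbook tabular Q-learning update (Watkins & Dayan, 1992). This sketch shows the standard update, not the paper's exact implementation; the state and action labels are made up for illustration. With gamma = 0 the agent values only instantaneous reward (the greedy, tick-by-tick case), while gamma > 0 gives inter-temporal optimization.

```python
from collections import defaultdict

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values(), default=0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# Nested dict of state -> action -> value, initialized to zero on first access.
Q = defaultdict(lambda: defaultdict(float))
q_update(Q, state="low_demand", action="bid_high", reward=100.0,
         next_state="low_demand")
# First update from zero: Q("low_demand", "bid_high") = 0.1 * 100 = 10.0
```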
- U.S. Federal Energy Regulatory Commission (2003a). Notice of white paper. Technical report.
- U.S. Federal Energy Regulatory Commission (2003b). Report to Congress on competition in the wholesale and retail markets for electric energy. Technical report.
- Holmberg, P. (2005). Modelling bidding behaviour in electricity auctions: Supply function equilibria with uncertain demand and capacity constraints. PhD thesis, Uppsala University.
- Hu, J., & Wellman, M. P. (1998). Multiagent reinforcement learning: Theoretical framework and an algorithm. In Proceedings of the 15th International Conference on Machine Learning (pp. 242–250). San Francisco, CA: Morgan Kaufmann.
- Joskow, P. (2006). Markets for power in the United States: An interim assessment. Energy Journal, 27(1), 1–36.
- Kaelbling, L., Littman, M., & Moore, A. (1996). Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4, 237–285.
- Kahn, A., Cramton, P., Porter, R., & Tabors, R. (2001). Uniform pricing or pay-as-bid pricing: A dilemma for California and beyond. The Electricity Journal, 70–79.
- Marimon, R., & McGrattan, E. (1995). On adaptive learning in strategic games. In A. Kirman & M. Salmon (Eds.), Learning and rationality in economics (pp. 63–101). Blackwell.
- Puterman, M. (1994). Markov decision processes: Discrete stochastic dynamic programming. Wiley.
- Tesfatsion, L. (2006). ACE research area: Restructured electricity markets. Website available at http://www.econ.iastate.edu/tesfatsi/ace.htm, hosted by the Economics Department, Iowa State University.
- Tesfatsion, L., & Judd, K. (Eds.) (2006). Handbook of computational economics: Agent-based computational economics (Vol. 2, Handbooks in economics series). North Holland.
- Watkins, C., & Dayan, P. (1992). Q-learning. Machine Learning, 8(3–4), 279–292.