Coordination control of greenhouse environmental factors
Optimal control of the greenhouse climate is one of the key techniques in digital agriculture. The greenhouse climate, a nonlinear and uncertain system, comprises several major environmental factors such as temperature, humidity, light intensity, and CO2 concentration. Because of the complex coupled correlations among these factors, coordinated control of the greenhouse environment is challenging. This paper proposes a model-free coordination control approach for greenhouse environmental factors based on Q-learning. A coordination control policy is learned through systematic interaction with the dynamic environment, achieving optimal greenhouse climate control under control cost constraints. To reduce the risk of trial-and-error on the real system and the computational complexity of the Q-learning algorithm, case-based reasoning (CBR) is seamlessly incorporated into the Q-learning process. Experimental results demonstrate that the approach is practical, effective, and efficient.
Keywords: Q-learning, case-based reasoning (CBR), greenhouse environmental factors, coordination control, coupled correlation, trial-and-error
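The Q-learning scheme the abstract describes can be illustrated with a minimal tabular sketch. This is not the paper's actual formulation: the state here is a single discretized temperature band and the three-action set, setpoint, and cost weight are all hypothetical stand-ins, whereas the paper's state couples temperature, humidity, light intensity, and CO2 concentration. The reward combines deviation from the setpoint with an actuation penalty, mirroring the stated control cost constraint.

```python
import random

# Hypothetical toy setup: one environmental factor (temperature),
# discretized into bands 0..10 with the setpoint at band 5.
N_STATES = 11
ACTIONS = [-1, 0, +1]   # cool, do nothing, heat (assumed action set)
SETPOINT = 5
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2
CONTROL_COST = 0.2      # per-actuation penalty, modeling the cost constraint

def step(state, action):
    """Toy dynamics: the action shifts the temperature band, clamped to range."""
    nxt = max(0, min(N_STATES - 1, state + action))
    # Reward: negative deviation from the setpoint minus the control cost.
    reward = -abs(nxt - SETPOINT) - CONTROL_COST * abs(action)
    return nxt, reward

def train(episodes=2000, horizon=30, seed=0):
    """Model-free tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)        # random initial condition
        for _ in range(horizon):
            if rng.random() < EPS:         # explore
                a = rng.randrange(len(ACTIONS))
            else:                          # exploit current estimate
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r = step(s, ACTIONS[a])
            # Standard Q-learning update toward the bootstrapped target.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train()
# Greedy policy: heat below the setpoint, cool above it, idle at it.
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
          for s in range(N_STATES)]
```

In the paper's approach, CBR would additionally retrieve similar past cases to initialize or shortcut this trial-and-error loop; that retrieval step is omitted here for brevity.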