LTF Robot: Binocular Robot with Laser-Point Tracking and Focusing Function
A traditional binocular vision system must match the images captured by its left and right cameras, which incurs heavy computational cost and matching errors. This paper proposes a novel binocular vision method with a laser-point tracking and focusing (LTF) function. A binocular robot implementing this function, called the LTF Robot, is developed. The LTF Robot consists of two cameras, a platform with 3 degrees of freedom, a microcontroller, and a computer running a LabVIEW-based application that provides the LTF function. When the position of the laser point changes, the intersection of the optical axes of the two cameras is driven to coincide with the laser point in the environment, so that the laser point lies at the center of both images. The laser point comes from a laser pointer held by an operator or from LED lights mounted on targets. The LTF function is useful for many applications, e.g. easily guiding the robot in human-robot interaction or games, active monitoring, and video recording.
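The tracking behavior described above amounts to a simple visual-servo loop: detect the laser point in each image, measure its offset from the image center, and command the pan/tilt platform to drive that offset to zero. A minimal sketch of this loop is shown below in Python; the actual application is built in LabVIEW, and the brightest-pixel detector and proportional gain used here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def detect_laser_point(frame: np.ndarray):
    """Return (row, col) of the brightest pixel, taken as the laser point.

    Assumes the laser point is the dominant bright spot in a grayscale frame.
    """
    idx = np.argmax(frame)
    return np.unravel_index(idx, frame.shape)

def centering_error(frame: np.ndarray):
    """Pixel offset (d_row, d_col) of the laser point from the image center."""
    r, c = detect_laser_point(frame)
    h, w = frame.shape
    return r - h // 2, c - w // 2

def pan_tilt_step(error, gain=0.01):
    """Proportional command (pan, tilt) that drives the centering error to zero.

    Pan corrects the column (horizontal) error; tilt corrects the row error.
    The sign convention and gain are placeholders for a real platform.
    """
    d_row, d_col = error
    return -gain * d_col, -gain * d_row

# Example: a synthetic 100x100 frame with a laser point at (30, 70).
frame = np.zeros((100, 100))
frame[30, 70] = 1.0
error = centering_error(frame)        # offset from center (50, 50)
pan, tilt = pan_tilt_step(error)      # commands pushing the point to center
```

In the robot itself, this loop would run once per camera per frame, with the two pan/tilt corrections jointly steering both optical axes onto the laser point so their intersection converges to it.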
Keywords: Robot vision · Visual control · Binocular system · Laser point tracking · Gazing
This research was supported by the National Natural Science Foundation of China (No. 51575302), the Beijing Natural Science Foundation (No. J170005), and the National Key R&D Program of China (No. 2017YFE0113200).