Luminance: A New Visual Feature for Visual Servoing
Abstract
This chapter is dedicated to a new way of achieving robotic tasks by 2D visual servoing. Contrary to most related works in this domain, where geometric visual features are usually used, we directly consider the luminance of all pixels in the image. We call this new visual servoing scheme photometric visual servoing. Its main advantage is that it greatly simplifies the image processing otherwise required to track geometric visual features along the camera motion, or to match the initial visual features with the desired ones. As in classical visual servoing, however, the so-called interaction matrix has to be computed; in our case, this matrix links the time variation of the luminance to the camera motion. We will see that this computation relies on an illumination model able to describe complex luminance changes. Since most classical control laws fail when the luminance is considered as a visual feature, we turn the visual servoing problem into an optimization one, leading to a new control law. Experimental results on positioning tasks validate the feasibility of photometric visual servoing and show its robustness with respect to approximated depths, Lambertian and non-Lambertian objects, low-textured objects, partial occlusions and even, to some extent, the image content.
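To make the two ingredients of the abstract concrete, the sketch below shows, under simplifying assumptions, how a photometric interaction matrix and an optimization-style velocity update could be formed. For each pixel of luminance I, the classical optical-flow relation gives L_I = -(∇I)ᵀ L_x, where L_x is the usual interaction matrix of an image point with depth Z; the control step then follows a Levenberg-Marquardt-like update rather than the classical pseudo-inverse law. The constant depth, the pseudo-normalized pixel coordinates and the function names are all assumptions of this illustration, not the authors' implementation.

```python
import numpy as np

def photometric_interaction_matrix(I, Z=1.0):
    """Stack, for every pixel, the row L_I = -(dI/dx * Lx + dI/dy * Ly),
    where (Lx, Ly) are the two rows of the point interaction matrix.
    Depth Z is assumed constant over the image (a rough approximation)."""
    h, w = I.shape
    gy, gx = np.gradient(I)                       # dI/dy, dI/dx
    y, x = np.mgrid[0:h, 0:w].astype(float)
    x = (x - w / 2) / w                           # pseudo-normalized coords (assumption)
    y = (y - h / 2) / h
    # Rows of the classical point interaction matrix (2 x 6) at each pixel:
    Lx = np.stack([-np.ones_like(x) / Z, np.zeros_like(x), x / Z,
                   x * y, -(1 + x**2), y], axis=-1)
    Ly = np.stack([np.zeros_like(x), -np.ones_like(x) / Z, y / Z,
                   1 + y**2, -x * y, -x], axis=-1)
    L = -(gx[..., None] * Lx + gy[..., None] * Ly)
    return L.reshape(-1, 6)                       # one 6-vector per pixel

def control_step(I, I_star, Z=1.0, lam=1.0, mu=0.01):
    """One Levenberg-Marquardt-like camera velocity update:
    v = -lam * (H + mu * diag(H))^-1 * L^T * (I - I*), with H = L^T L."""
    e = (I - I_star).ravel()
    L = photometric_interaction_matrix(I, Z)
    H = L.T @ L
    A = H + mu * np.diag(np.diag(H)) + 1e-9 * np.eye(6)  # small ridge for safety
    return -lam * np.linalg.solve(A, L.T @ e)
```

When the current image equals the desired one, the error e vanishes and so does the commanded velocity, which is the expected fixed point of such a scheme.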
Keywords
Cost Function, Visual Feature, Interaction Matrix, Visual Servoing, Illumination Model