Abstract
This paper presents the design and implementation of a practical visual servo for correcting the pose of an end-effector that has been placed away from its target test point on a printed circuit board (PCB). Such placement errors can be attributed to the connecting joints of the robot arm, angular errors, the geometric model, and/or the camera projection model. In automated fault insertion test (AFIT), the typical difficulty a robot encounters is servoing the probe tip accurately onto the targeted test point, i.e., a conductive pad on the PCB. Conventionally, touch-and-sense with the probe tip has been used; upon detection of a failure, the visual servo is triggered and the robot, in effect, looks through a camera and re-attempts the placement using the camera's visual information. No clearly defined specification currently exists for feature extraction, and hence for test point detection, because it is not cost-effective to designate part of the PCB footprint as a dedicated test point, separate from the main design, to accommodate visual sensing. As a result, test points of many kinds have been embedded in industrial PCB designs, and this work therefore requires building a custom knowledge base of test point features to support image recognition. Furthermore, operational factors in industrial manufacturing, e.g., head-structure maintenance, replacement of end-effectors, and changes of projection parameters caused by hardware adjustment, affect the accuracy of probe placement: they shift the geometric model away from its original configuration and thus introduce errors. This paper proposes a practical closed-loop visual servo design that addresses precision error, image feature extraction, and these manufacturing factors, and details the findings from the design phase through the implementation phase.
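The closed-loop correction described above — detect the test point in the image, compute the residual offset, command the robot, and repeat until the probe is within tolerance — can be sketched as follows. This is a minimal illustrative loop, not the authors' implementation: the `capture`, `detect`, and `move_by` callables, the pixel-to-millimetre scale, and the assumption that the camera optical axis is aligned with the probe are all hypothetical stand-ins for the paper's calibrated hardware and knowledge-base-driven feature extraction.

```python
import numpy as np

def visual_servo(capture, detect, move_by, pixels_to_mm, tol_mm=0.05, max_iter=10):
    """Iteratively correct the probe pose until the detected test point
    lies within tol_mm of the image centre (assumed to coincide with the
    probe axis after calibration). Returns True on convergence."""
    for _ in range(max_iter):
        image = capture()                       # grab a frame from the camera
        target_px = np.asarray(detect(image), dtype=float)  # feature extraction step
        # image centre in (x, y) pixel order: shape is (rows, cols)
        centre_px = np.array(image.shape[:2][::-1], dtype=float) / 2.0
        error_mm = (target_px - centre_px) * pixels_to_mm   # scale from calibration
        if np.linalg.norm(error_mm) <= tol_mm:
            return True                         # pose corrected within tolerance
        move_by(error_mm)                       # command the robot by the residual
    return False                                # failed to converge
```

In a position-based scheme such as this, the accuracy of `pixels_to_mm` (and, more generally, of the camera model) directly bounds the final placement error, which is why the paper emphasises recalibration after hardware changes.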
Author information
Tassanai Banlue is a D.Eng. candidate in Electrical Engineering at King Mongkut's Institute of Technology Ladkrabang (KMITL), Bangkok, Thailand. His main research interests include machine vision and wireless communications. He has been implementing automated testing of electronics products for a number of customers (Oracle, IBM, Alcatel-Lucent, EMC, and Cisco). He received his B.Ind. and M.Eng. degrees from KMITL, both in Electrical Engineering.
Pitikhate Sooraksa is currently an Associate Professor of Electrical Engineering at the School of Computer Engineering and Information Science, Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang (KMITL), Bangkok, Thailand. His research interests include IT-mechatronics, rapid prototyping of embedded systems, and computer-aided control. He received his B.Ed. (Hons) and M.Sc. in Physics from Srinakharinwirot University, an M.S. from George Washington University (1992), and a Ph.D. from the University of Houston (1996), both in Electrical Engineering.
Suthichai Noppanakeepong received his B.Eng. degree in Telecommunications Engineering and his M.Eng. degree in Electrical Engineering from King Mongkut's Institute of Technology Ladkrabang (KMITL), Thailand, and his Ph.D. degree from the Tokyo Institute of Technology, Tokyo, Japan, in 1984, 1989, and 1996, respectively. He is currently an Associate Professor of Electrical Engineering at the Optical Laboratory in the Faculty of Telecommunications, KMITL, Bangkok, Thailand. His research interests include optical fiber communications and radio wave propagation.
Cite this article
Banlue, T., Sooraksa, P. & Noppanakeepong, S. A practical position-based visual servo design and implementation for automated fault insertion test. Int. J. Control Autom. Syst. 12, 1090–1101 (2014). https://doi.org/10.1007/s12555-013-0128-3