Multisensor Fusion for Computer Vision, pp. 171–194
Fusion of Range and Intensity Image Data for Recognition of 3D Object Surfaces
Abstract
Intensity images and range images are commonly used as input data for robot vision. Each kind of data has advantages and disadvantages for extracting image features. The purpose of this work is to show how fusing the two images, obtained simultaneously from a laser scanner, leads to more reliable 3D object recognition.
The fusion is performed on two levels: edge analysis and curvature analysis. Most edges can be extracted more easily from an intensity image than from a range image, but some edge attributes describing the 3D structure of an object in a scene can be obtained only from the range image. A group of such edge attributes was defined in this work so that a complementary fusion could be made. The curvature of object surfaces was classified into curvature classes, defined so that they remain invariant in both images. This makes it possible to verify the curvature information of one image against that of the other, so a concurrent fusion could be performed. Edges that could not be extracted from the intensity image could be extracted by curvature analysis of the range image. Because curvature analysis is more time consuming than edge analysis, the system control unit invokes it only when necessary.
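The text does not spell out how the curvature classes are computed; a standard way to obtain viewpoint-invariant curvature classes from a range image is the HK sign map (classifying each pixel by the signs of mean curvature H and Gaussian curvature K). The sketch below illustrates that idea under this assumption; the function name, tolerance parameter, and class encoding are all hypothetical, not the chapter's actual definitions.

```python
import numpy as np

def hk_classify(z, eps=1e-3):
    """Assign each pixel of a range image z(x, y) a curvature class
    from the signs of the mean (H) and Gaussian (K) curvature of the
    graph surface (x, y, z(x, y)). Of the nine sign pairs, eight are
    geometrically realizable (H**2 >= K rules out H = 0 with K > 0):
    peak, pit, ridge, valley, saddle ridge, saddle valley, flat, minimal.
    This is an illustrative sketch, not the chapter's own scheme."""
    zy, zx = np.gradient(z)          # first partial derivatives
    zyy, zyx = np.gradient(zy)       # second partials via repeated
    zxy, zxx = np.gradient(zx)       # central differences
    denom = 1.0 + zx**2 + zy**2
    # Mean and Gaussian curvature of a Monge-patch surface
    H = ((1 + zy**2) * zxx - 2 * zx * zy * zxy
         + (1 + zx**2) * zyy) / (2 * denom**1.5)
    K = (zxx * zyy - zxy**2) / denom**2
    # Quantize signs to -1 / 0 / +1 with a tolerance band around zero
    sh = np.where(np.abs(H) < eps, 0, np.sign(H)).astype(int)
    sk = np.where(np.abs(K) < eps, 0, np.sign(K)).astype(int)
    # Encode the sign pair as a class index in 0..8
    return 3 * (sh + 1) + (sk + 1)
```

For example, on a convex paraboloid `z = -(x**2 + y**2)` the centre pixel has H < 0 and K > 0 (a "peak"), so its class index is `3*0 + 2 = 2`. Because H and K depend only on the surface shape, the same labels can be predicted from shading-derived curvature in the intensity image, which is what makes a concurrent (verification-style) fusion possible.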
Feedback from the matching stage determines whether curvature analysis should be applied. If a match between the object model and the image features fails because of insufficient or incorrect feature information, the curvature analysis is activated; it supplies either the curvature information or a missing edge, upon which a new match can be made.
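The control strategy just described (cheap edge matching first, costly curvature analysis only on failure) can be sketched as a simple loop. All callables and feature representations below are hypothetical placeholders standing in for the system's actual modules, which the abstract does not specify.

```python
def recognize(intensity_img, range_img, models,
              extract_edges, curvature_analysis, match):
    """Match models against edge features first; fall back to the
    more expensive curvature analysis of the range image only when
    the edge-based match fails. Features are modelled as a set of
    symbolic labels for illustration."""
    features = extract_edges(intensity_img, range_img)
    for model in models:
        if match(model, features):
            return model
    # Edge-based match failed: enrich the feature set with curvature
    # information from the range image and try again.
    features = features | curvature_analysis(range_img)
    for model in models:
        if match(model, features):
            return model
    return None  # no model matched even with curvature features
```

A quick stub-based run shows the intended behaviour: a model that needs a curvature feature fails the first pass and succeeds only after the fallback runs.

```python
models = [{"name": "block", "needs": {"edge_a", "curv_b"}}]
result = recognize(
    None, None, models,
    extract_edges=lambda i, r: {"edge_a"},
    curvature_analysis=lambda r: {"curv_b"},
    match=lambda m, f: m["needs"] <= f,
)
```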