
Neural Computing and Applications, Volume 22, Issue 7–8, pp 1267–1277

Shape space estimation by higher-rank of SOM

  • Sho Yakushiji
  • Tetsuo Furukawa
ICONIP 2011

Abstract

The aim of this study is to develop an estimation method for a shape space. Here, “shape space” means a nonlinear subspace formed by a class of visual shapes, in which continuous changes in shape are naturally represented. Within the shape space, various operations on shapes, such as identification, classification, recognition, and interpolation, can be carried out. This paper introduces an algorithm based on a generative model of shapes, implemented with a higher-rank self-organizing map (SOM2). We use this method to estimate the shape space of artificial contours. In addition, we present results from a simulation of omnidirectional camera images taken by mobile robots; our technique accurately predicts changes in image properties as the robot’s attitude changes. Finally, we consider the addition of local features to our method and show that their inclusion solves the correspondence problem. These results suggest the potential of our technique for future applications.
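In a SOM2 (a "SOM of SOMs"), each unit of a parent SOM is itself a child SOM that models one shape, and the parent map organizes these child maps so that similar shapes occupy nearby positions. As a building block for that architecture, the following is a minimal sketch of a conventional online SOM in Python; the function name, hyperparameters, and decay schedules are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_som(data, grid_shape=(8, 8), n_iters=200, sigma0=3.0, lr0=0.5, seed=0):
    """Minimal online SOM: each grid unit holds a reference vector that is
    pulled toward presented samples, weighted by a Gaussian neighborhood
    function defined on the grid coordinates."""
    rng = np.random.default_rng(seed)
    n_units = grid_shape[0] * grid_shape[1]
    # Grid coordinates of each unit, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(grid_shape[0])
                       for j in range(grid_shape[1])], dtype=float)
    # Initialize reference vectors from randomly chosen data samples.
    weights = data[rng.integers(0, len(data), n_units)].astype(float)
    for t in range(n_iters):
        frac = t / n_iters
        sigma = sigma0 * (0.1 / sigma0) ** frac   # shrink neighborhood radius
        lr = lr0 * (0.01 / lr0) ** frac           # decay learning rate
        x = data[rng.integers(0, len(data))]      # present one random sample
        # Best-matching unit: the unit whose reference vector is closest to x.
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        # Gaussian neighborhood weights around the BMU on the grid.
        d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * sigma ** 2))
        # Pull every unit toward x, scaled by its neighborhood weight.
        weights += lr * h[:, None] * (x - weights)
    return weights, coords
```

In a SOM2, a loop of this kind would run at two levels: child SOMs fit individual shapes, and a parent SOM fits the child maps themselves, yielding the estimated shape space.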

Keywords

Shape representation · Shape space · Self-organizing map · Higher-rank of SOM

Notes

Acknowledgments

This work was supported by KAKENHI 23500280 and KAKENHI 22240022.


Copyright information

© Springer-Verlag London Limited 2012

Authors and Affiliations

  1. Department of Brain Science and Engineering, Kyushu Institute of Technology, Kitakyushu, Japan
