Computing Saliency Map from Spatial Information in Point Cloud Data

  • Oytun Akman
  • Pieter Jonker
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6474)

Abstract

Saliency detection in 2D and 3D images is widely used in computer vision applications such as obstacle detection, object recognition and segmentation. In this paper we present a new saliency detection method that exploits spatial irregularities in an environment. A Time-of-Flight (TOF) camera provides 3D points representing the spatial information available in the environment. Two separate saliency maps are computed: one from local surface properties (LSP) at different scales, and one from the distance between the points and the camera. First, residuals representing the irregularities are obtained by fitting planar patches to the 3D points at different scales. The residuals are then combined across the spatial scales into a saliency map in which points with high residual values mark the non-trivial regions of the surfaces. A second saliency map is generated from the proximity of each point in the point cloud to the camera. Finally, the two saliency maps are averaged to produce a master saliency map.
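The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: it assumes plane fitting via PCA over each point's k nearest neighbours (with k playing the role of the spatial scale), uses the smallest covariance eigenvalue as the planar-fit residual, and models proximity saliency simply as inverted, normalised depth. All function names and parameters are hypothetical.

```python
import numpy as np

def plane_residuals(points, k):
    # For each 3D point, fit a plane to its k nearest neighbours via PCA
    # and take the smallest covariance eigenvalue (variance normal to the
    # best-fit plane) as the residual: high residual = locally non-planar.
    n = len(points)
    res = np.zeros(n)
    for i in range(n):
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[:k]]
        cov = np.cov(nbrs.T)
        res[i] = np.linalg.eigvalsh(cov)[0]  # eigenvalues in ascending order
    return res

def saliency_map(points, scales=(8, 16, 32)):
    # Surface-based map: average the plane-fit residuals over several
    # neighbourhood sizes (the spatial scales), then normalise to [0, 1].
    surf = np.mean([plane_residuals(points, k) for k in scales], axis=0)
    surf = surf / (surf.max() + 1e-12)
    # Proximity-based map: points closer to the camera (smaller z) are
    # more salient; normalise inverted depth to [0, 1].
    depth = points[:, 2]
    prox = 1.0 - (depth - depth.min()) / (np.ptp(depth) + 1e-12)
    # Master map: average of the two normalised maps.
    return 0.5 * (surf + prox)
```

A real TOF point cloud would call for a k-d tree for the neighbour queries and a more careful scale-combination scheme, but the structure above mirrors the two-map-plus-averaging design of the method.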

Keywords

saliency detection · point cloud · local surface properties · time-of-flight camera

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Oytun Akman¹
  • Pieter Jonker¹
  1. Delft Biorobotics Laboratory, Department of BioMechanical Engineering, Delft University of Technology, Delft, The Netherlands