3D Point Cloud Video Segmentation Based on Interaction Analysis

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9915)

Abstract

Given the widespread availability of point cloud data from consumer depth sensors, 3D segmentation has become a promising building block for high-level applications such as scene understanding and interaction analysis. It benefits from the richer information contained in real-world 3D data compared with the apparent (projected) data in 2D images. As a consequence, the classical challenges of color segmentation have recently shifted to RGBD data, while new challenges arise because depth information is usually noisy, sparse and unorganized. In this paper, we present a novel segmentation approach for 3D point cloud video that is based on low-level features and oriented to the analysis of object interactions. We propose a hierarchical representation of the input point cloud: the finer level efficiently segments each point cloud, while the coarser level temporally establishes correspondences between segments and dynamically manages object splits and merges. Experiments illustrate promising results for our approach and its potential application in object interaction analysis.
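To make the two-level idea concrete, the sketch below shows how a single frame could be over-segmented at the fine level with an off-the-shelf supervoxel method (VCCS, as implemented in PCL's SupervoxelClustering), yielding the segment adjacency graph on which a coarser level of temporal correspondence and split/merge management could then be built. This is a minimal illustrative sketch, not the authors' implementation; the PCD file name, resolutions and importance weights are assumptions chosen for readability.

```cpp
// Minimal sketch: fine-level over-segmentation of one point cloud frame
// with PCL supervoxels (VCCS). The coarse level (temporal correspondence,
// split/merge management) is only indicated by the adjacency graph.
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/segmentation/supervoxel_clustering.h>

#include <cstdint>
#include <iostream>
#include <map>

int main(int argc, char** argv) {
  using PointT = pcl::PointXYZRGBA;

  pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>);
  if (argc < 2 || pcl::io::loadPCDFile<PointT>(argv[1], *cloud) < 0) {
    std::cerr << "usage: fine_level_segmentation <frame.pcd>" << std::endl;
    return 1;
  }

  // Fine level: group points into supervoxels. The resolutions and
  // importance weights below are illustrative, not the paper's settings.
  const float voxel_resolution = 0.008f;  // voxel grid size (metres)
  const float seed_resolution = 0.10f;    // spacing between supervoxel seeds
  pcl::SupervoxelClustering<PointT> super(voxel_resolution, seed_resolution);
  super.setInputCloud(cloud);
  super.setColorImportance(0.2f);
  super.setSpatialImportance(0.4f);
  super.setNormalImportance(1.0f);

  std::map<std::uint32_t, pcl::Supervoxel<PointT>::Ptr> supervoxels;
  super.extract(supervoxels);

  // Adjacency between supervoxels: a coarser level would cluster this
  // graph into object segments and track them over frames, creating or
  // merging segments as objects split apart or come into contact.
  std::multimap<std::uint32_t, std::uint32_t> adjacency;
  super.getSupervoxelAdjacency(adjacency);

  std::cout << supervoxels.size() << " supervoxels, "
            << adjacency.size() << " adjacency edges" << std::endl;
  return 0;
}
```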

Keywords

Object segmentation · 3D point clouds · Dynamic split and merge management · Object interactions

Acknowledgement

This work has been developed in the framework of the project TEC2013-43935-R, financed by the Spanish Ministerio de Economía y Competitividad and the European Regional Development Fund (ERDF).


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

Image Processing Group, Technical University of Catalonia (UPC), Barcelona, Spain
