
Bioinspired point cloud representation: 3D object tracking

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

This work considers the problem of processing point cloud sequences. In particular, a system is presented that represents and tracks objects in dynamic scenes acquired using low-cost sensors such as the Kinect. An efficient neural network-based approach is proposed to represent and estimate the motion of 3D objects. The system addresses multiple computer vision tasks: object segmentation, representation, motion analysis and tracking. The use of a neural network allows the unsupervised estimation of motion and the representation of the objects in the scene, avoiding the problem of finding corresponding features while tracking moving objects. A set of experiments is presented that demonstrates the validity of our method for tracking 3D objects, and an optimization strategy is applied to achieve real-time processing rates. Favorable results demonstrate the capabilities of the Growing Neural Gas (GNG)-based algorithm for this task. Videos of the proposed system are available on the project website (http://www.dtic.ua.es/~sorts/3d_object_tracking/).
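The abstract gives only a high-level description of the method, which builds on the Growing Neural Gas (GNG) network: a graph of reference vectors that adapts to the input distribution and thereby provides a compact, topology-preserving representation of each point cloud frame. As a rough illustration only, the Python sketch below runs the classic GNG adaptation loop (Fritzke, 1995) on a single frame; the class name GrowingNeuralGas, the parameter values and the synthetic input frame are illustrative assumptions, not the authors' optimized real-time implementation.

import numpy as np


class GrowingNeuralGas:
    """Toy GNG (Fritzke, 1995): a graph of nodes that adapts to a point cloud."""

    def __init__(self, eps_b=0.05, eps_n=0.005, age_max=50,
                 lambda_=100, alpha=0.5, d=0.995, max_nodes=200):
        self.eps_b, self.eps_n = eps_b, eps_n          # winner / neighbour learning rates
        self.age_max, self.lambda_ = age_max, lambda_  # edge lifetime, insertion period
        self.alpha, self.d = alpha, d                  # error-decay factors
        self.max_nodes = max_nodes
        self.nodes = None     # (M, 3) reference vectors
        self.errors = None    # accumulated squared error per node
        self.edges = {}       # {(i, j): age}, undirected, i < j

    def _neighbours(self, i):
        return [b if a == i else a for (a, b) in self.edges if i in (a, b)]

    def fit(self, points, n_signals=10000, rng=None):
        rng = rng if rng is not None else np.random.default_rng(0)
        if self.nodes is None:                      # first frame: start with two nodes
            start = rng.choice(len(points), 2, replace=False)
            self.nodes = points[start].astype(float)
            self.errors = np.zeros(2)
            self.edges = {(0, 1): 0}
        for step in range(1, n_signals + 1):
            x = points[rng.integers(len(points))]   # random sample from the cloud
            dists = np.linalg.norm(self.nodes - x, axis=1)
            s1, s2 = (int(k) for k in np.argsort(dists)[:2])
            self.errors[s1] += dists[s1] ** 2
            # move the winner and its topological neighbours towards the sample
            self.nodes[s1] += self.eps_b * (x - self.nodes[s1])
            for n in self._neighbours(s1):
                self.nodes[n] += self.eps_n * (x - self.nodes[n])
            self.edges[tuple(sorted((s1, s2)))] = 0  # refresh/create winner-runner-up edge
            # age the winner's edges and drop the ones that are too old
            for e in list(self.edges):
                if s1 in e:
                    self.edges[e] += 1
                    if self.edges[e] > self.age_max:
                        del self.edges[e]
            # every lambda_ signals, insert a node where the accumulated error is largest
            if step % self.lambda_ == 0 and len(self.nodes) < self.max_nodes:
                q = int(np.argmax(self.errors))
                nbrs = self._neighbours(q)
                if nbrs:
                    f = max(nbrs, key=lambda n: self.errors[n])
                    self.nodes = np.vstack([self.nodes, 0.5 * (self.nodes[q] + self.nodes[f])])
                    self.errors[q] *= self.alpha
                    self.errors[f] *= self.alpha
                    self.errors = np.append(self.errors, self.errors[q])
                    r = len(self.nodes) - 1
                    self.edges.pop(tuple(sorted((q, f))), None)
                    self.edges[(q, r)] = 0
                    self.edges[(f, r)] = 0
            self.errors *= self.d                   # global error decay
        return self                                 # (isolated-node removal omitted)


# Usage sketch with a synthetic frame standing in for a segmented Kinect cloud.
frame = np.random.rand(5000, 3)
gng = GrowingNeuralGas().fit(frame)
print(len(gng.nodes), "nodes,", len(gng.edges), "edges")

For tracking across frames, the map fitted above can simply keep adapting: calling fit again on each incoming segmented frame reuses the existing nodes, so they follow the moving object without the explicit feature correspondences the abstract says the method avoids. This is a plausible reading of the GNG-based approach, not a reproduction of the authors' exact pipeline.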




Notes

  1. Kinect for Xbox 360, Microsoft: http://www.xbox.com/kinect.

  2. http://www.dtic.ua.es/~sorts/3d_object_tracking/.


Acknowledgments

This work was partially funded by the Spanish Government under Grant DPI2013-40534-R.

Author information


Corresponding author

Correspondence to Jose Garcia-Rodriguez.


About this article


Cite this article

Orts-Escolano, S., Garcia-Rodriguez, J., Cazorla, M. et al. Bioinspired point cloud representation: 3D object tracking. Neural Comput & Applic 29, 663–672 (2018). https://doi.org/10.1007/s00521-016-2585-0


  • DOI: https://doi.org/10.1007/s00521-016-2585-0

