
A dual approach for machine-awareness in indoor environment combining pseudo-3D imaging and soft-computing techniques

  • Original Article
  • Published:
International Journal of Machine Learning and Cybernetics

Abstract

3D spatial characterization of objects is a key step toward the autonomous knowledge extraction that leads to machine awareness. In this paper we describe a dual-image approach to 3D characterization and localization of objects for machine awareness in indoor environments. The so-called dual images are provided by the color and depth cameras of the Kinect system, which offers appealing potential for 3D object modeling and localization. Taking human–machine (including human–robot) interaction as a primary outcome of the intended visual machine awareness, we aim to give the machine spatial awareness of its surrounding environment, identifying detected items and semantically describing their relative positions. To this end, pseudo-3D imaging and soft-computing techniques are combined in order to extract, recognize, and spatially characterize objects, and to characterize the distances between objects, in a 3D environment. In other words, on the one hand, we propose a pseudo-3D object modeling and localization method based on the dual images of the Kinect system, including the computation of an object's spatial characterization. On the other hand, we propose a computational semantic-description algorithm using a Fuzzy Inference System, which accomplishes the numerical-to-semantic conversion required for machine awareness. Experimental results validating the investigated approach are reported and discussed.
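The two stages described above can be sketched in a few lines of Python: back-projecting two depth pixels to 3D points with the standard pinhole model, measuring their Euclidean distance, and converting that number to a linguistic label with triangular fuzzy memberships. The intrinsics, label names, and membership breakpoints below are illustrative assumptions, not the calibration or rule base actually used in the paper.

```python
import math

# Illustrative pinhole intrinsics (focal lengths and principal point, in pixels).
# Typical Kinect-v1-like values; NOT the calibration used in the paper.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def back_project(u, v, depth_m):
    """Back-project depth pixel (u, v) with depth in metres to a 3-D point
    (x, y, z) in the camera frame, using the standard pinhole model."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return (x, y, depth_m)

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def semantic_distance(d):
    """Map a numeric distance (metres) to a linguistic label by taking the
    fuzzy set with the highest membership degree. Labels and breakpoints
    are hypothetical, chosen only for illustration."""
    memberships = {
        "very close": tri(d, -0.5, 0.0, 0.8),
        "close":      tri(d,  0.4, 1.0, 1.8),
        "far":        tri(d,  1.4, 2.5, 4.0),
    }
    return max(memberships, key=memberships.get)

# Two objects detected at different pixels/depths in the dual (color + depth) image.
p1 = back_project(200, 150, 1.2)
p2 = back_project(420, 260, 2.0)
d = math.dist(p1, p2)          # Euclidean distance in 3-D, ~1.07 m here
label = semantic_distance(d)   # -> "close"
```

A full Mamdani-style Fuzzy Inference System would aggregate several such rules and defuzzify; the max-membership shortcut above is only a minimal stand-in for the numerical-to-semantic conversion the abstract describes.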



Author information

Correspondence to Kurosh Madani.


Cite this article

Madani, K., Hassan, D. & Sabourin, C. A dual approach for machine-awareness in indoor environment combining pseudo-3D imaging and soft-computing techniques. Int. J. Mach. Learn. & Cyber. 8, 1795–1814 (2017). https://doi.org/10.1007/s13042-016-0559-2

