A Binary Descriptor Invariant to Rotation and Robust to Noise (BIRRN) for Floor Recognition

  • J. A. de Jesús Osuna-Coutiño
  • Jose Martinez-Carranza
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11524)


Floor recognition is a conventional task in computer vision with applications in fields ranging from augmented reality to autonomous driving. A plethora of methods address this problem, several of them based on visual descriptors. However, most previous work has low robustness under image degradation. One alternative for addressing image degradation is the use of binary descriptors. Unfortunately, these descriptors are sensitive to noise. In addition, they use only some of the pixels within a patch, which limits the floor recognition scope since useful information is available for only a small pixel set. To cope with these problems, we propose a new texture descriptor based on binary patterns that is suitable for floor recognition. This descriptor is robust to noise and illumination changes, invariant to rotation, and considers a larger number of pixels than previous LBP-based approaches. Experimental results are encouraging: the proposed texture descriptor reaches high performance in several real-world scenarios, achieving 7.4% higher recall and a 3.7% higher \(F\)-score than previous texture descriptors, and it is highly robust under image degradation.


Keywords: Binary descriptor · Floor recognition · Urbanized scenes


References

  1. Okada, K., Inaba, M., Inoue, H.: Walking navigation system of humanoid robot using stereo vision based floor recognition and path planning with multi-layered body image. In: IROS, pp. 2155–2160 (2003)
  2. Goncalves, R., Reis, J., Santana, E., Carvalho, N.B., Pinho, P., Roselli, L.: Smart floor: indoor navigation based on RFID. In: WPT (2013)
  3. Hoiem, D., Efros, A.A., Hebert, M.: Automatic photo pop-up. ACM Trans. Graph. 24, 577–584 (2005)
  4. Sánchez, C., Taddei, P., Ceriani, S., Wolfart, E., Sequeira, V.: Localization and tracking in known large environments using portable real-time 3D sensors. Comput. Vis. Image Underst. 149, 197–208 (2016)
  5. Martel, J.N., Sandamirskaya, Y., Dudek, P.: A demonstration of tracking using dynamic neural fields on a programmable vision chip. In: ICDSC, pp. 212–213 (2016)
  6. Khaliq, A.A., Pecora, F., Saffiotti, A.: Children playing with robots using stigmergy on a smart floor. In: UIC/ATC/ScalCom/CBDCom/IoP/SmartWorld (2016)
  7. Serra, R., Knittel, D., Di Croce, P., Peres, R.: Activity recognition with smart polymer floor sensor: application to human footstep recognition. IEEE Sens. J. 16, 5757–5775 (2016)
  8. Kang, S.H., et al.: Implementation of Smart Floor for multi-robot system. In: ICARA, pp. 46–51 (2011)
  9. Zhang, H., Ye, C.: An indoor wayfinding system based on geometric features aided graph SLAM for the visually impaired. IEEE Trans. Neural Syst. Rehabil. Eng. 25, 1592–1604 (2017)
  10. Dai, J., He, K., Sun, J.: Instance-aware semantic segmentation via multi-task network cascades. In: CVPR, pp. 3150–3158 (2016)
  11. Ojala, T., Pietikainen, M., Maenpaa, T.: Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 971–987 (2002)
  12. Heikkila, M., Pietikainen, M., Schmid, C.: Description of interest regions with local binary patterns. Pattern Recogn. 12, 425–436 (2009)
  13. Guillaume-Alexandre, B., Jean-Philippe, J., Nicolas, S.: Change detection in feature space using local binary similarity patterns. In: Computer and Robot Vision (CRV), pp. 106–112 (2013)
  14. Silva, C., Bouwmans, T., Frélicot, C.: An extended center-symmetric local binary pattern for background modeling and subtraction in videos. In: International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (2015)
  15. Deng, L., Yang, M., Qian, Y., Wang, C., Wang, B.: CNN based semantic segmentation for urban traffic scenes using fisheye camera. In: IEEE Intelligent Vehicles Symposium, pp. 231–236 (2017)
  16. Cherian, A., Morellas, V., Papanikolopoulos, N.: Accurate 3D ground plane estimation from a single image. In: ICRA, pp. 2243–2249 (2009)
  17. de Jesús Osuna-Coutiño, J.A., Martinez-Carranza, J., Arias-Estrada, M., Mayol-Cuevas, W.: Dominant plane recognition in interior scenes from a single image. In: ICPR, pp. 1923–1928 (2016)
  18. de Jesús Osuna-Coutiño, J.A., Cruz-Martínez, C., Martinez-Carranza, J., Arias-Estrada, M., Mayol-Cuevas, W.: I want to change my floor: dominant plane recognition from a single image to augment the scene. In: ISMAR, pp. 135–140 (2016)
  19. Craft, R.C., Leake, C.: The Pareto principle in organizational decision making. Manag. Decis. 40, 729–733 (2002)
  20. Ng, A.: Machine Learning. Coursera, Stanford (2017)
  21. Krähenbühl, P., Koltun, V.: Efficient inference in fully connected CRFs with Gaussian edge potentials. In: Advances in Neural Information Processing Systems (2011)
  22. Saxena, A., Sun, M., Ng, A.Y.: Make3D: learning 3D scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. 31, 824–840 (2009)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • J. A. de Jesús Osuna-Coutiño (1)
  • Jose Martinez-Carranza (1, 2)

  1. Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Cholula, Mexico
  2. University of Bristol, Bristol, UK