Mobile Application with Cloud-Based Computer Vision Capability for University Students’ Library Services

  • Joe Llerena-Izquierdo
  • Fernando Procel-Jupiter
  • Alison Cunalema-Arana
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1277)

Abstract

This paper presents the development of the smart-device mobile application “Book’s Recognition”. The app recognizes the text of book titles at the library of the Universidad Politécnica Salesiana in Guayaquil, Ecuador. Through a service hosted on Amazon Web Services (AWS), Mobile Vision’s text-recognition algorithms, and Google’s API on the Android platform, “Book’s Recognition” allows its user to recognize the title text of a physical book in an innovative and effective way, showing the user basic information about the book in real time. The application can be offered as a service of the library. The purpose of this development is to spark university students’ interest in new and creative forms of intelligent investigation with resources from the university’s main library, and furthermore to facilitate the research process by providing information on both digitized and non-digitized books that hold valuable and relevant information for all generations. The mobile app can be downloaded from the following website: https://github.com/seimus96/mobile_vision.
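As a rough sketch of the on-device recognition step the abstract describes, the Google Mobile Vision Text API on Android exposes a `TextRecognizer` that detects text blocks in a camera frame. The function name, logging tag, and the idea of returning detected lines for a catalogue lookup are illustrative assumptions; the AWS-hosted lookup service the paper mentions is not shown here.

```kotlin
import android.content.Context
import android.graphics.Bitmap
import android.util.Log
import com.google.android.gms.vision.Frame
import com.google.android.gms.vision.text.TextBlock
import com.google.android.gms.vision.text.TextRecognizer

// Hypothetical helper: recognize the title text on a photographed book
// cover and return the detected lines, e.g. for a catalogue lookup.
fun recognizeTitle(context: Context, bitmap: Bitmap): List<String> {
    val recognizer = TextRecognizer.Builder(context).build()
    if (!recognizer.isOperational) {
        // Detector dependencies may still be downloading on first use.
        Log.w("BooksRecognition", "Text recognizer not yet operational")
        return emptyList()
    }
    val frame = Frame.Builder().setBitmap(bitmap).build()
    val blocks = recognizer.detect(frame) // SparseArray<TextBlock>
    val lines = mutableListOf<String>()
    for (i in 0 until blocks.size()) {
        val block: TextBlock = blocks.valueAt(i)
        lines.add(block.value)
    }
    recognizer.release()
    return lines
}
```

Since this API requires an Android runtime and Google Play services, the sketch cannot run standalone; newer apps would typically use its successor, ML Kit text recognition, instead.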

Keywords

Machine vision · Text recognition · University library services

Notes

Acknowledgments

We thank the Salesian Polytechnic University; the team of people who work in the library of the university’s Guayaquil branch for carrying out the project and testing the developed prototypes; the university students in academic period 55; the engineering majors for being critical in their responses to the development and implementation of the application which allowed for improvements and a new version of the application; and the GIEACI research group (https://gieaci.blog.ups.edu.ec/) for their support in the methodological-logical research process.

Copyright information

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2021

Authors and Affiliations

  1. Universidad Politécnica Salesiana, Guayaquil, Ecuador
