A Reading Assistant System for Blind People Based on Hand Gesture Recognition

  • Qiang Lu
  • Guangtao Zhai (corresponding author)
  • Xiongkuo Min
  • Yucheng Zhu
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1181)


Abstract

This paper proposes a reading assistant system for blind people based on hand gesture recognition. The system consists of seven modules: a camera input module, a page adjustment module, a page information retrieval module, a hand pose estimation module, a hand gesture recognition module, a media controller, and an audio output device. In the page adjustment module, Hough line detection and local OCR (Optical Character Recognition) are used to rectify the text orientation. For the hand gesture recognition module, we propose three practical methods: a geometry model, a heatmap model, and a keypoint model. The geometry model recognizes gestures from geometric characteristics of the hand. The heatmap model, built on an image classification algorithm, uses a CNN (Convolutional Neural Network) to classify hand gestures. To simplify the networks of the heatmap model, the keypoint model extracts 21 keypoints from a hand heatmap and assembles their coordinates into a dataset for training a classifier. All three methods achieve good gesture recognition results, and by recognizing gestures the proposed system provides an effective reading assistant function.
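The skew-estimation step of the page adjustment module can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it uses a brute-force Hough vote over edge pixels rather than an optimized library routine, and the deskew angle is then the returned angle minus 90 degrees.

```python
# Minimal Hough-voting sketch (an assumption, not the paper's implementation):
# each edge pixel votes for every (angle, rho) line passing through it; the
# bin with the most votes gives the dominant line orientation of the text.
import math

def dominant_angle(points, angle_step_deg=1.0):
    """Return the normal angle (degrees, 0-179) of the line through
    `points` that collects the most Hough votes."""
    votes = {}
    angles = [i * angle_step_deg for i in range(int(180 / angle_step_deg))]
    for x, y in points:
        for a in angles:
            theta = math.radians(a)
            # rho is the signed distance of the line from the origin
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(a, rho)] = votes.get((a, rho), 0) + 1
    best = max(votes, key=votes.get)
    return best[0]

# Edge pixels along a horizontal text baseline y = 5
baseline = [(x, 5) for x in range(200)]
print(dominant_angle(baseline))  # → 90.0 (the normal of a horizontal line)
```

A perfectly horizontal baseline yields 90 degrees, so a page rotated by some small angle would yield 90 plus that angle, and rotating the image back by the difference deskews the text before OCR.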


Keywords: Reading assistant · OCR · Hand gesture recognition · Convolutional pose machine · Heatmap · Keypoint · Geometry
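The geometry model described in the abstract could, for instance, count extended fingers from the 21 hand keypoints. The sketch below is a hypothetical illustration rather than the paper's method: the OpenPose-style keypoint ordering and the tip/joint indices in `FINGERS` are assumptions.

```python
# Hypothetical geometry-model classifier: count a finger as extended when
# its tip lies farther from the wrist than its middle joint does.
import math

# Tip / middle-joint indices per finger in the assumed 21-point layout
# (wrist at index 0, four joints per finger).
FINGERS = {"thumb": (4, 2), "index": (8, 6), "middle": (12, 10),
           "ring": (16, 14), "little": (20, 18)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def extended_fingers(keypoints):
    """keypoints: list of 21 (x, y) pairs, wrist at index 0."""
    wrist = keypoints[0]
    return [name for name, (tip, joint) in FINGERS.items()
            if dist(keypoints[tip], wrist) > dist(keypoints[joint], wrist)]

# Toy hand: every keypoint folded at the wrist except an extended index finger
pts = [(0.0, 0.0)] * 21
pts[6] = (0.0, 1.0)   # index middle joint
pts[8] = (0.0, 2.0)   # index tip
print(extended_fingers(pts))  # → ['index']
```

The set of extended fingers then maps to a gesture label (e.g., one finger for "next page"); the actual gesture vocabulary is defined in the paper itself.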



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Qiang Lu (1, 2)
  • Guangtao Zhai (1, 2, corresponding author)
  • Xiongkuo Min (1, 2)
  • Yucheng Zhu (1, 2)
  1. Shanghai Institute for Advanced Communication and Data Science, Shanghai, China
  2. Shanghai Jiao Tong University, Shanghai, China
