Scene word recognition from pieces to whole
Convolutional neural networks (CNNs) have achieved great success on the object classification problem. For character classification, we found that training and testing CNNs on accurately segmented character regions yields higher accuracy than using roughly segmented regions. We therefore aim to extract complete character regions from scene images. Text in natural scene images usually contrasts clearly with its surroundings, and many methods attempt to extract characters through various segmentation techniques. However, for blurred, occluded, or complex-background cases, these methods may produce adjoined or over-segmented characters. In this paper, we propose a scene word recognition model that integrates words from small pieces into a whole after cluster-based segmentation. The segmented connected components are classified into four types: background, individual character proposals, adjoined characters, and stroke proposals. Individual character proposals are fed directly into a CNN trained on accurately segmented character images. A sliding-window strategy is applied to adjoined character regions. Stroke proposals are treated as fragments of whole characters whose locations are estimated by a stroke spatial distribution system. The characters estimated from adjoined characters and stroke proposals are then classified by a CNN trained on roughly segmented character images. Finally, a lexicon-driven integration method produces the final word recognition results. Our method achieves performance comparable to other word recognition methods on the Street View Text, ICDAR 2003, and ICDAR 2013 benchmark databases. Moreover, it can handle occluded and improperly segmented text images.
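The final, lexicon-driven integration step described above can be sketched as follows. This is a minimal illustrative assumption, not the authors' implementation: the toy score dictionaries stand in for per-position CNN classifier confidences, and the function simply selects the lexicon word with the highest total character support.

```python
# Hedged sketch of lexicon-driven word integration: each character position
# yields confidence scores per candidate letter (here hand-written dicts,
# standing in for CNN softmax outputs), and the output word is the lexicon
# entry with the highest summed score. All names are illustrative.

def integrate_with_lexicon(char_scores, lexicon):
    """Pick the lexicon word best supported by per-position character scores.

    char_scores: list of dicts mapping a candidate character to its
                 classifier confidence at that position.
    lexicon:     iterable of candidate words.
    """
    def word_score(word):
        if len(word) != len(char_scores):
            return float("-inf")  # only length-matched words compete here
        # Characters absent from a position's score dict contribute 0.
        return sum(scores.get(ch, 0.0)
                   for ch, scores in zip(word, char_scores))

    return max(lexicon, key=word_score)


# Toy example: noisy per-position scores for a four-character region.
scores = [
    {"t": 0.9, "f": 0.4},
    {"e": 0.8, "o": 0.5},
    {"x": 0.7, "s": 0.6},
    {"t": 0.9, "l": 0.2},
]
print(integrate_with_lexicon(scores, ["text", "test", "fool"]))  # prints "text"
```

A real system would also need to handle lexicon words whose length differs from the number of detected character proposals, e.g. by aligning under insertions and deletions rather than rejecting them outright.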
Keywords: text recognition, convolutional neural networks, cluster-based segmentation, character integration
This work was supported in part by the National Natural Science Foundation of China (Grant No. 61703316), and in part by the Human Interface Lab of Kyushu University, Japan.