DeepBIBX: Deep Learning for Image Based Bibliographic Data Extraction
Extracting structured bibliographic data from document images of non-native-digital academic content is a challenging problem with applications in the automation of library cataloging systems and in reference linking. Existing approaches discard the visual cues: they convert the document image to text and then identify citation strings using trained segmentation models. Besides requiring large amounts of training data, these methods are also language dependent. This paper presents a novel approach, DeepBIBX, which targets the problem from a computer vision perspective and uses deep learning to semantically segment the individual citation strings in a document image. DeepBIBX is based on deep fully convolutional networks and uses transfer learning to extract bibliographic references from document images. Unlike existing approaches, which use textual content to semantically segment bibliographic references, DeepBIBX relies on image-based contextual information, which makes it applicable to documents in any language. To gauge the performance of the presented approach, a dataset of 286 document images containing 5090 bibliographic references was collected. Evaluation results show that DeepBIBX outperforms the state-of-the-art method ParsCit in bibliographic reference extraction, achieving an accuracy of 84.9% compared to 71.7%. Furthermore, on the pixel classification task, DeepBIBX achieved a precision of 96.2% and a recall of 94.4%.
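The pipeline the abstract describes reduces to two steps once a per-pixel mask is available: grouping the predicted "reference" pixels into individual citation strings, and scoring the mask against ground truth with pixel precision and recall. The sketch below illustrates both on a toy mask using a simple row-projection heuristic; the function names, the `min_gap` parameter, and the grouping heuristic are illustrative assumptions, not the paper's actual post-processing (the paper's FCN produces the mask itself).

```python
import numpy as np

def split_references(mask, min_gap=2):
    """Group foreground rows of a binary page mask into reference blocks.

    A row belongs to a reference if it contains any foreground pixels;
    runs of foreground rows separated by at least `min_gap` background
    rows are treated as separate citation strings. Returns a list of
    (start_row, end_row) pairs with end_row exclusive.
    """
    rows = mask.any(axis=1)
    blocks, start, gap = [], None, 0
    for i, on in enumerate(rows):
        if on:
            if start is None:
                start = i
            gap = 0
        elif start is not None:
            gap += 1
            if gap >= min_gap:
                blocks.append((start, i - gap + 1))
                start = None
    if start is not None:
        blocks.append((start, len(rows)))
    return blocks

def pixel_precision_recall(pred, truth):
    """Pixel-level precision and recall of a predicted binary mask."""
    tp = np.logical_and(pred, truth).sum()
    precision = float(tp / pred.sum()) if pred.sum() else 0.0
    recall = float(tp / truth.sum()) if truth.sum() else 0.0
    return precision, recall

# Toy page: two "references" of 3 rows each, separated by a 3-row gap.
mask = np.zeros((12, 20), dtype=bool)
mask[1:4, 2:18] = True
mask[7:10, 2:18] = True
print(split_references(mask))              # [(1, 4), (7, 10)]
print(pixel_precision_recall(mask, mask))  # (1.0, 1.0)
```

A row projection is the crudest possible grouping; it works only for single-column layouts, which is why a learned pixel classifier such as the paper's FCN is needed for real pages.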
Keywords: Deep learning · Machine learning · Bibliographic data · Reference linking
This work was partially supported by the DFG under contract DE 420/18-1 and by the Swiss National Science Foundation under grant number 407540_167320.
- 1. Councill, I.G., Giles, C.L., Kan, M.Y.: ParsCit: an open-source CRF reference string parsing package. In: LREC 2008 (2008)
- 3. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)
- 4. Zhang, X., Li, Z., Loy, C.C., Lin, D.: PolyNet: a pursuit of structural diversity in very deep networks. arXiv preprint arXiv:1611.05725 (2016)
- 5. Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, Inception-ResNet and the impact of residual connections on learning. In: AAAI, pp. 4278–4284 (2017)
- 6. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431–3440 (2015)
- 7. Caruana, R.: Multitask learning. In: Thrun, S., Pratt, L. (eds.) Learning to Learn. Springer, Boston (1998)
- 8. Everingham, M., Van Gool, L., Williams, C., Winn, J., Zisserman, A.: PASCAL visual object classes challenge results 1(6), 7 (2005), www.pascal-network.org
- 9. Johnson, R.K.: Special issue: In Google's broad wake: taking responsibility for shaping the global digital library. ARL: A Bimonthly Report on Research Library Issues and Actions from ARL, CNI, and SPARC, vol. 250. Association of Research Libraries (2007)
- 10. Crossref Labs pdfextract, https://www.crossref.org/labs/pdfextract/
- 11. Breuel, T.M.: The OCRopus open source OCR system. DRR 6815, 68150 (2008)
- 12. Anystyle.io, https://anystyle.io