What to Read Next? Challenges and Preliminary Results in Selecting Representative Documents

  • Tilman Beck
  • Falk Böschen
  • Ansgar Scherp
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 903)

Abstract

The vast amount of scientific literature poses a challenge when one is trying to understand a previously unknown topic. Presenting the user with a small, representative subset of documents that covers most of the desired content can address this challenge. We build on existing research on representative subset extraction and apply it in an information retrieval setting. Our document selection process consists of three steps: computation of the document representations, clustering, and selection of documents. We implement and compare two document representations, two clustering algorithms, and three selection methods, evaluated with a coverage and a redundancy metric. We run our 36 experiments on two datasets from different domains, with 10 sample queries each. The results show no clear favorite and raise the question of whether coverage and redundancy are sufficient for evaluating representative subsets.
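The three-step process described in the abstract (representation, clustering, selection) can be illustrated with a minimal sketch. The concrete choices below, TF-IDF vectors, k-means clustering, and nearest-to-centroid selection via scikit-learn, together with the helper name select_representatives, are illustrative assumptions and not necessarily the exact configurations compared in the paper.

```python
# Minimal sketch of a three-step representative-document selection pipeline:
# (1) compute document representations, (2) cluster them, (3) pick one
# document per cluster. The specific methods here (TF-IDF, k-means,
# nearest-to-centroid) are illustrative assumptions, not the paper's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin_min


def select_representatives(documents, k=5, random_state=0):
    """Return indices of k representative documents and the cluster labels."""
    # Step 1: document representations (here: TF-IDF vectors).
    vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)

    # Step 2: clustering of the representations (here: k-means).
    kmeans = KMeans(n_clusters=k, random_state=random_state, n_init=10)
    labels = kmeans.fit_predict(vectors)

    # Step 3: selection (here: the document nearest to each cluster centroid).
    closest, _ = pairwise_distances_argmin_min(kmeans.cluster_centers_, vectors)
    return sorted(set(closest.tolist())), labels


# Example usage on a toy corpus:
docs = [
    "neural topic models for scientific text",
    "k-means clustering of document collections",
    "selecting a representative subset of documents",
    "latent dirichlet allocation for topic discovery",
    "evaluating coverage and redundancy of subsets",
]
reps, _ = select_representatives(docs, k=2)
print([docs[i] for i in reps])
```

Selecting the document closest to each cluster centroid is one plausible way to favor coverage of all clusters while keeping redundancy low, which matches the two evaluation criteria mentioned in the abstract.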

Keywords

Representative document selection · Document clustering

Notes

Acknowledgment

This research was co-financed by the EU H2020 project MOVING (http://www.moving-project.eu/) under contract no. 693092 and the EU project DigitalChampions_SH.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Department of Computer Science, Kiel University, Kiel, Germany
  2. Computing Science and Mathematics, University of Stirling, Stirling, Scotland, UK