Efficient and effective spam filtering and re-ranking for large web datasets

Abstract

The TREC 2009 web ad hoc and relevance feedback tasks used a new document collection, the ClueWeb09 dataset, which was crawled from the general web in early 2009. The dataset contains one billion web pages, a substantial fraction of which are spam: pages designed to deceive search engines so as to deliver an unwanted payload. We examine the effect of spam on the results of these tasks. We show that a simple content-based classifier with minimal training is efficient enough to rank the "spamminess" of every page in the dataset using a standard personal computer in 48 hours, and effective enough to yield significant and substantive improvements in the fixed-cutoff precision (estP10) as well as rank measures (estR-Precision, StatMAP, MAP) of nearly all submitted runs. Moreover, by using a set of "honeypot" queries, the labeling of training data can be reduced to an entirely automatic process. The results of classical information retrieval methods are particularly enhanced by filtering, rising from among the worst to among the best.
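The kind of lightweight content-based filter the abstract describes can be sketched as an online logistic-regression scorer over hashed byte 4-grams. The class name, learning rate, n-gram length, and table size below are all illustrative choices, not the paper's exact implementation (which appears as a compact C program in Fig. 2):

```python
from math import exp

def features(text, n=4, buckets=1 << 20):
    """Hash the distinct overlapping byte n-grams of a page into a fixed feature space."""
    data = text.encode("utf-8", "ignore")
    return {hash(data[i:i + n]) % buckets for i in range(len(data) - n + 1)}

class OnlineLogisticFilter:
    """Online logistic regression trained one example at a time by gradient descent."""

    def __init__(self, rate=0.002, buckets=1 << 20):
        self.rate = rate
        self.buckets = buckets
        self.w = [0.0] * buckets  # one weight per hash bucket

    def score(self, text):
        """Return the estimated probability that the page is spam (0.5 = no evidence)."""
        z = sum(self.w[f] for f in features(text, buckets=self.buckets))
        return 1.0 / (1.0 + exp(-z))

    def train(self, text, is_spam):
        """Single gradient step toward the label on this page's features."""
        g = self.rate * ((1.0 if is_spam else 0.0) - self.score(text))
        for f in features(text, buckets=self.buckets):
            self.w[f] += g

# Illustrative usage: labels could come from automatic "honeypot" queries.
filt = OnlineLogisticFilter()
filt.train("cheap pills buy now click here cheap pills", True)
filt.train("information retrieval evaluation at TREC", False)
assert filt.score("buy cheap pills now") > filt.score("TREC evaluation")
```

Because scoring is a single pass over each page's bytes with no index required, a filter of this shape can process the whole collection sequentially, which is consistent with the 48-hour single-machine figure reported above.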



Notes

  1. http://trec.nist.gov
  2. http://boston.lti.cs.cmu.edu/Data/clueweb09
  3. http://webspam.lip6.fr/wiki/pmwiki.php
  4. http://barcelona.research.yahoo.net/webspam/datasets
  5. http://trec-legal.umiacs.umd.edu
  6. http://www.seomoz.org/popular-searches/index/2008-mm-dd
  7. http://rdf.dmoz.org
  8. Figure 2 is Copyright © 2010 Gordon V. Cormack. This code is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
  9. http://durum0.uwaterloo.ca/clueweb09spam



Acknowledgments

The authors thank Ellen Voorhees and Ian Soboroff at the National Institute of Standards and Technology (USA) for providing access to the TREC data. Stephen Tomlinson, Ian Soboroff, and Ellen Voorhees provided invaluable feedback on a draft. This research was supported by grants from the Natural Sciences and Engineering Research Council of Canada and from Amazon.

Author information

Corresponding author

Correspondence to Charles L. A. Clarke.


Cite this article

Cormack, G.V., Smucker, M.D. & Clarke, C.L.A. Efficient and effective spam filtering and re-ranking for large web datasets. Inf Retrieval 14, 441–465 (2011). https://doi.org/10.1007/s10791-011-9162-z


Keywords

  • Web search
  • Spam
  • Web spam
  • Evaluation
  • TREC