Information Retrieval, Volume 14, Issue 5, pp 441–465

Efficient and effective spam filtering and re-ranking for large web datasets

  • Gordon V. Cormack
  • Mark D. Smucker
  • Charles L. A. Clarke

Abstract

The TREC 2009 web ad hoc and relevance feedback tasks used a new document collection, the ClueWeb09 dataset, which was crawled from the general web in early 2009. This dataset contains 1 billion web pages, a substantial fraction of which are spam: pages designed to deceive search engines so as to deliver an unwanted payload. We examine the effect of this spam on the results of the two tasks. We show that a simple content-based classifier with minimal training is efficient enough to rank the “spamminess” of every page in the dataset using a standard personal computer in 48 hours, and effective enough to yield significant and substantive improvements in the fixed-cutoff precision (estP10) as well as in rank measures (estR-Precision, StatMAP, MAP) of nearly all submitted runs. Moreover, using a set of “honeypot” queries, the labeling of training data may be reduced to an entirely automatic process. The results of classical information retrieval methods are particularly enhanced by filtering, rising from among the worst to among the best.
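The classifier is described above only at a high level. As a rough illustration, the sketch below shows one plausible reading of a “simple content-based classifier”: single-pass online logistic regression over hashed byte 4-gram features drawn from a fixed-length prefix of each page. The feature definition, prefix length, hash-table size, and learning rate are all illustrative assumptions, not the authors’ reported configuration.

    # Sketch of a content-based spam ranker: online logistic regression
    # over hashed byte 4-grams. All constants are illustrative assumptions.
    import math

    NUM_BUCKETS = 1 << 20   # size of hashed feature space (assumption)
    LEARN_RATE = 0.002      # gradient step size (assumption)
    weights = [0.0] * NUM_BUCKETS

    def features(page_bytes, prefix=2500):
        """Hash overlapping byte 4-grams of the page prefix into buckets.

        Note: Python's built-in hash of bytes is randomized per process;
        a production ranker would use a stable hash.
        """
        data = page_bytes[:prefix]
        return {hash(data[i:i + 4]) % NUM_BUCKETS
                for i in range(len(data) - 3)}

    def spamminess(page_bytes):
        """Logistic score in (0, 1); higher means more spam-like."""
        score = sum(weights[f] for f in features(page_bytes))
        return 1.0 / (1.0 + math.exp(-score))

    def train(page_bytes, is_spam):
        """One online gradient step on a single labeled page."""
        gradient = (1.0 if is_spam else 0.0) - spamminess(page_bytes)
        for f in features(page_bytes):
            weights[f] += LEARN_RATE * gradient

    # Minimal training, then score every page in the corpus.
    train(b"<html>cheap pills!! buy now click here</html>", True)
    train(b"<html>Department of History course calendar</html>", False)
    print(spamminess(b"<html>buy cheap pills now</html>"))

Because each page is scored in a single pass with constant memory, a filter of this general shape can plausibly rank a billion-page corpus on one machine, which is the kind of efficiency the 48-hour figure above depends on.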

Keywords

Web search · Spam · Web spam · Evaluation · TREC

Acknowledgments

The authors thank Ellen Voorhees and Ian Soboroff at the National Institute of Standards and Technology (USA) for providing access to the TREC data. Invaluable feedback on a draft was provided by Stephen Tomlinson, Ian Soboroff, and Ellen Voorhees. This research was supported by grants from the Natural Sciences and Engineering Research Council (Canada) and from Amazon.

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  • Gordon V. Cormack¹
  • Mark D. Smucker¹
  • Charles L. A. Clarke¹

  1. University of Waterloo, Waterloo, Canada
