Multimedia Tools and Applications, Volume 70, Issue 2, pp 1049–1067

Non-collaborative content detecting on video sharing social networks

  • Antonio da Luz
  • Eduardo Valle
  • Arnaldo de A. Araújo

Abstract

In this work we are concerned with detecting non-collaborative videos in video sharing social networks. Specifically, we investigate how much visual content-based analysis can aid in detecting ballot stuffing and spam videos in threads of video responses. This is a very challenging task because of the high-level semantic concepts involved, the heterogeneous nature of social networks, which prevents the use of constrained a priori information, and, above all, the context-dependent nature of non-collaborative videos. Content filtering for social networks is an increasingly important task: as these networks grow in popularity, abuses also tend to increase, annoying users and disrupting the services. We propose two approaches, each adapted to a specific non-collaborative action: ballot stuffing, which tries to inflate the popularity of a given video by posting “fake” responses to it, and spamming, which tries to insert an unrelated video as a response to popular videos. We advocate the use of low-level features combined into higher-level representations, such as bags of visual features and latent semantic analysis. Our experiments show the feasibility of the proposed approaches.
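
To make the representation pipeline concrete, the sketch below combines local descriptors into bag-of-visual-features histograms, projects them with latent semantic analysis, and feeds the result to a supervised classifier. It is only an illustration of the general idea: the synthetic descriptors, dictionary size, SVD rank, and SVM classifier are assumptions made for the example, not the configuration used in the paper.

    # Minimal sketch (not the authors' exact method) of a bag-of-visual-features
    # plus latent-semantic-analysis pipeline for classifying video responses.
    # It assumes local descriptors (e.g. 128-D SIFT-like vectors) have already
    # been extracted from each video's keyframes; the data here is synthetic.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import TruncatedSVD
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    videos = [rng.normal(size=(200, 128)) for _ in range(40)]  # 40 videos, 200 descriptors each
    labels = np.array([0, 1] * 20)                             # 1 = non-collaborative (hypothetical labels)

    # 1. Visual dictionary: cluster descriptors into visual words (dictionary size 64 is an assumption).
    kmeans = KMeans(n_clusters=64, n_init=4, random_state=0).fit(np.vstack(videos))

    # 2. Bag of visual features: one normalized histogram of visual-word counts per video.
    def bovf_histogram(descriptors):
        words = kmeans.predict(descriptors)
        hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
        return hist / hist.sum()

    X = np.array([bovf_histogram(v) for v in videos])

    # 3. Latent semantic analysis: low-rank projection of the word-by-video matrix
    #    (tf-idf weighting could be applied before the SVD, as in text retrieval).
    X_latent = TruncatedSVD(n_components=16, random_state=0).fit_transform(X)

    # 4. Supervised classifier (here an SVM) on the latent features.
    clf = SVC(kernel="rbf").fit(X_latent[:30], labels[:30])
    print("held-out predictions:", clf.predict(X_latent[30:]))

In a real setting, the descriptors would come from keyframes sampled from each video response, and the labels from manually annotated response threads.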

Keywords

Content filtering · Bags of visual features · Latent semantic analysis · Video social networks

Copyright information

© Springer Science+Business Media, LLC 2012

Authors and Affiliations

  • Antonio da Luz (1)
  • Eduardo Valle (2)
  • Arnaldo de A. Araújo (1)
  1. NPDI Lab, DCC/UFMG, Belo Horizonte, Brazil
  2. RECOD Lab, DCA/FEEC/UNICAMP, Campinas, Brazil
