Non-collaborative content detection on video sharing social networks
In this work we address the detection of non-collaborative videos in video sharing social networks. Specifically, we investigate how much visual content-based analysis can aid in detecting ballot stuffing and spam videos in threads of video responses. This is a very challenging task because of the high-level semantic concepts involved; the heterogeneous nature of social networks, which prevents the use of constrained a priori information; and, most importantly, the context-dependent nature of non-collaborative videos. Content filtering for social networks is an increasingly demanded task: as these networks grow in popularity, abuses also tend to increase, annoying users and disrupting services. We propose two approaches, each one better adapted to a specific non-collaborative action: ballot stuffing, which tries to inflate the popularity of a given video by posting "fake" responses to it, and spamming, which tries to insert an unrelated video as a response to popular videos. We endorse the combination of low-level features into higher-level representations, such as bags of visual features and latent semantic analysis. Our experiments show the feasibility of the proposed approaches.
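The abstract's pipeline of combining low-level features into a higher-level representation can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it assumes local descriptors have already been extracted from video frames, uses a toy k-means to build a visual vocabulary, quantizes descriptors into bag-of-visual-features histograms, and applies latent semantic analysis as a truncated SVD of the word-document matrix. All function names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_vocabulary(descriptors, k=8, iters=10):
    """Toy k-means clustering to build a visual vocabulary (k 'visual words')."""
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bovf_histogram(descriptors, centers):
    """Quantize descriptors against the vocabulary -> normalized BoVF histogram."""
    dist = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
    labels = dist.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(centers)).astype(float)
    return hist / hist.sum()

def lsa_project(term_doc, rank=2):
    """Latent semantic analysis: rank-r truncated SVD of the word-document matrix."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:rank]) @ vt[:rank]).T  # documents in the latent space

# toy data: synthetic local descriptors for three "videos"
videos = [rng.normal(loc=m, size=(50, 16)) for m in (0.0, 0.1, 2.0)]
vocab = build_vocabulary(np.vstack(videos))
hists = np.stack([bovf_histogram(v, vocab) for v in videos])  # videos x words
latent = lsa_project(hists.T)  # one low-dimensional vector per video
print(latent.shape)  # (3, 2)
```

The low-dimensional vectors produced this way could then feed any classifier that separates collaborative from non-collaborative responses; the choice of vocabulary size and latent rank would be tuned on real data.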
Keywords: Content filtering; Bags of visual features; Latent semantic analysis; Video social networks
The authors thank the Brazilian agencies CNPq, CAPES, FAPEMIG and FAPESP for their financial support.