Multimodal approach for multimedia injurious contents blocking
With the development of IT, harmful multimedia content is spreading rapidly, and obscene and violent content in particular has a negative impact on children. In this paper, we therefore propose a multimodal approach for blocking obscene and violent video content. The approach consists of two modules, one detecting obsceneness and the other detecting violence. The obsceneness module contains a model that detects obsceneness based on adult and racy scores. The violence module contains two models: a blood detection model based on RGB color regions, and a motion extraction model that exploits the observation that violent actions involve larger changes in motion magnitude and direction. Based on the results of these three models, the approach judges whether the content is harmful. This can contribute to blocking obscene and violent content that is distributed indiscriminately.
Keywords: Computer vision · Obsceneness · Violence · Harmful contents · Multimedia
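As a rough illustration of the kind of rule the blood detection model relies on, the sketch below flags "blood-like" pixels whose red channel is bright and clearly dominant over green and blue, then reports their fraction in a frame. The threshold values and function name are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def blood_pixel_ratio(frame_rgb: np.ndarray,
                      r_min: int = 120,
                      dominance: float = 1.5) -> float:
    """Fraction of blood-like pixels in an H x W x 3 RGB frame.

    A pixel counts as blood-like when its red channel is bright
    (>= r_min) and dominates green and blue by the given factor.
    Thresholds here are illustrative, not the paper's values.
    """
    r = frame_rgb[..., 0].astype(np.float32)
    g = frame_rgb[..., 1].astype(np.float32)
    b = frame_rgb[..., 2].astype(np.float32)
    mask = (r >= r_min) & (r >= dominance * g) & (r >= dominance * b)
    return float(mask.mean())

# Synthetic frame: left half deep red, right half neutral gray.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[:, :5] = (200, 30, 30)    # blood-like region
frame[:, 5:] = (128, 128, 128)  # neutral region
print(blood_pixel_ratio(frame))  # 0.5
```

In a full pipeline this per-frame ratio would be thresholded (or averaged over a clip) and combined with the motion and obsceneness model outputs before the final harmfulness decision.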