Abstract
Social media has made it convenient for users to express, communicate, discuss, and exchange opinions on a wide range of issues. Platforms such as Twitter, YouTube, Facebook, and news portals let users express themselves through comments. However, these platforms are also misused in the name of freedom of speech: they carry numerous improper messages directed at specific persons or communities, written in abusive, vulgar, hostile, or harsh language, and bots are now involved in spreading such messages as well. As a result, the user experience on social media is sometimes ruined. Automatic identification and filtering of such offensive messages is therefore an important step toward improving user experience. This paper proposes a heterogeneous ensemble-based machine learning (ML) model, powered by artificial intelligence (AI), that classifies messages into the Threat, Obscenity, Insult, Identity Hate, Toxic, and Severe Toxic categories. Experimental evaluation on a standard dataset demonstrates the accuracy and adaptability of the proposed model.
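The paper itself does not publish its implementation, but the approach described in the abstract (a heterogeneous ensemble performing multi-label classification over six toxicity categories) can be sketched as follows. This is an illustrative assumption-laden example, not the authors' actual model: the base learners, features, and toy data are all hypothetical, using scikit-learn's `VotingClassifier` wrapped in a one-vs-rest scheme so each toxicity label gets its own binary ensemble.

```python
# Hypothetical sketch of a heterogeneous ensemble for multi-label
# toxic-comment classification. Base learners, features, and the toy
# corpus are illustrative, not the paper's actual configuration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import VotingClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# The six categories named in the abstract (as used in the Jigsaw
# Toxic Comment Classification dataset).
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult",
          "identity_hate"]

# Tiny hand-made corpus; a real system would train on a labelled
# dataset such as the Jigsaw data.
texts = [
    "I will find you and hurt you",
    "what a disgusting filthy comment",
    "you are a pathetic idiot",
    "people like you do not belong here",
    "die die die you worthless scum",
    "thanks, this was really helpful",
    "have a wonderful day everyone",
]
# One 0/1 indicator per label; rows align with `texts`.
y = np.array([
    [1, 0, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 1],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0],
])

# Heterogeneous base learners combined by soft voting; one such
# binary ensemble is trained per toxicity label (one-vs-rest).
base = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("nb", MultinomialNB())],
    voting="soft",
)
model = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(base))
model.fit(texts, y)

# Predict returns a (n_samples, n_labels) 0/1 indicator matrix.
preds = model.predict(["you are a worthless idiot"])
print(preds.shape)
```

Soft voting averages the per-learner class probabilities, so a linear model and a naive Bayes model can compensate for each other's errors; this is one common way to realise a "heterogeneous" ensemble, though the paper may combine its learners differently.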
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Ranjan, A., Pintu, Kumar, V., Singh, M.P.: Artificial intelligence-based model for detecting inappropriate content on the fly. In: Bhateja, V., Yang, X.-S., Lin, J.C.-W., Das, R. (eds.) Evolution in Computational Intelligence. FICTA 2022. Smart Innovation, Systems and Technologies, vol. 326. Springer, Singapore (2023). https://doi.org/10.1007/978-981-19-7513-4_27
Print ISBN: 978-981-19-7512-7
Online ISBN: 978-981-19-7513-4
eBook Packages: Intelligent Technologies and Robotics (R0)