
Artificial Intelligence-Based Model for Detecting Inappropriate Content on the Fly

  • Conference paper
  • Evolution in Computational Intelligence (FICTA 2022)

Part of the book series: Smart Innovation, Systems and Technologies (SIST, volume 326)

Abstract

In recent years, social media has made it convenient for users to express, communicate, discuss, and exchange opinions on a wide range of issues. Platforms such as Twitter, YouTube, Facebook, and news portals allow users to express themselves through comments. However, these platforms are often misused in the name of freedom of speech: many messages directed at specific persons or communities contain abusive, vulgar, hostile, or harsh language, and bots now also take part in spreading such messages. As a result, user experience on social media is sometimes ruined. Automatic identification and filtering of such offensive messages is therefore an important problem for improving user experience. This paper proposes a heterogeneous ensemble-based machine learning (ML) model, powered by artificial intelligence (AI), that classifies messages into the categories Threat, Obscenity, Insult, Identity Hate, Toxic, and Severe Toxic. Experimental evaluation on a standard dataset demonstrates the accuracy and adaptability of the proposed model.
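To illustrate the kind of model the abstract describes, the following is a minimal sketch of a heterogeneous ensemble (several unrelated base learners combined by soft voting) applied as a multi-label classifier over the six toxicity categories. The choice of base learners, TF-IDF features, and the toy data are illustrative assumptions for this sketch, not the authors' actual pipeline; the Jigsaw toxic comment dataset is the standard benchmark carrying these six labels.

```python
# Illustrative sketch only (assumed components, not the paper's exact model):
# a heterogeneous soft-voting ensemble fitted per label for multi-label
# toxic-comment classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Toy corpus; a real run would use a large labeled dataset instead.
texts = [
    "have a nice day",
    "you are a stupid idiot",
    "i will hurt you badly",
    "people like you should leave",
    "thanks for the help",
]
# One binary indicator per label (multi-label, not single multi-class).
y = [
    [0, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
]

# Heterogeneous ensemble: three unrelated learners combined by soft voting
# (averaging their predicted class probabilities).
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
    ],
    voting="soft",
)

# TF-IDF features, then one ensemble per label via MultiOutputClassifier.
model = make_pipeline(TfidfVectorizer(), MultiOutputClassifier(ensemble))
model.fit(texts, y)

# Predict all six labels for a new comment: a (1, 6) binary matrix.
preds = model.predict(["you are a stupid idiot"])
```

With realistic training data, each of the six columns of `preds` indicates whether the comment falls into the corresponding category, so a single message can be flagged as, e.g., both Toxic and Insult at once.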




Corresponding author

Correspondence to Mahendra Pratap Singh.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Ranjan, A., Pintu, Kumar, V., Singh, M.P. (2023). Artificial Intelligence-Based Model for Detecting Inappropriate Content on the Fly. In: Bhateja, V., Yang, X.-S., Lin, J.C.-W., Das, R. (eds) Evolution in Computational Intelligence. FICTA 2022. Smart Innovation, Systems and Technologies, vol 326. Springer, Singapore. https://doi.org/10.1007/978-981-19-7513-4_27
