| Peer-Reviewed

Harmful Content on Social Media Detection Using by NLP

Published in Advances (Volume 4, Issue 2)
Received: 30 April 2023     Accepted: 12 June 2023     Published: 13 July 2023
Abstract

Twitter, Facebook, and Instagram are popular social media platforms that allow people to connect with the world through a social network and to express, share, and publish information. While online connection via these platforms is immensely desirable and has become an unavoidable fact of daily life, the underbelly of social networks can be seen in the form of harmful or objectionable material. Fake news, rumors, hate speech, hostility, and bullying are documented examples of harmful material of major concern to society. Such damaging content has a negative impact on mental health and leads to financial losses that are rarely recoverable. Screening and filtering of such information is thus an urgent requirement. In this paper, we summarize popular social media (SM) platforms such as Facebook, WhatsApp, and LinkedIn, and use abbreviations such as UGC, ML, and AI. This review focuses on methods for detecting harmful content through natural language processing; it then examines how such material can be moderated.
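To make the screening-and-filtering idea concrete, here is a minimal sketch of a lexicon-based harmful-content filter. This is an illustrative toy only, not the method surveyed in the paper: real NLP detection pipelines use machine-learning models trained on labeled corpora, and the word list and threshold below are hypothetical placeholders.

```python
# Illustrative sketch: flag a post when the fraction of tokens matching a
# harmful-word lexicon exceeds a threshold. Lexicon and threshold are
# hypothetical; production systems use trained classifiers instead.
import re

HARMFUL_LEXICON = {"idiot", "hate", "stupid", "kill"}  # hypothetical examples

def harmful_score(post: str) -> float:
    """Return the fraction of tokens that match the harmful lexicon."""
    tokens = re.findall(r"[a-z']+", post.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in HARMFUL_LEXICON)
    return hits / len(tokens)

def flag_for_moderation(post: str, threshold: float = 0.15) -> bool:
    """Flag a post for human review when its harmful-token ratio is high."""
    return harmful_score(post) >= threshold

posts = [
    "Lovely weather today, sharing some photos!",
    "You are a stupid idiot and everyone should hate you",
]
for p in posts:
    print(flag_for_moderation(p))  # prints False, then True
```

A flagged post would then enter the moderation stage discussed in the paper (e.g., human review or automatic removal); the ratio threshold here simply trades recall against false positives.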

DOI 10.11648/j.advances.20230402.13
Page(s) 49-59
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2023. Published by Science Publishing Group

Keywords

Social Media (SM) Platforms, Detection and Moderation, Natural Language Processing (NLP), Artificial Intelligence (AI), Hate Speech Detection

Cite This Article
  • APA Style

    Iqra Naz, Rehhmat Illahi. (2023). Harmful Content on Social Media Detection Using by NLP. Advances, 4(2), 49-59. https://doi.org/10.11648/j.advances.20230402.13


    ACS Style

    Iqra Naz; Rehhmat Illahi. Harmful Content on Social Media Detection Using by NLP. Advances. 2023, 4(2), 49-59. doi: 10.11648/j.advances.20230402.13


    AMA Style

    Iqra Naz, Rehhmat Illahi. Harmful Content on Social Media Detection Using by NLP. Advances. 2023;4(2):49-59. doi: 10.11648/j.advances.20230402.13


  • @article{10.11648/j.advances.20230402.13,
      author = {Iqra Naz and Rehhmat Illahi},
      title = {Harmful Content on Social Media Detection Using by NLP},
      journal = {Advances},
      volume = {4},
      number = {2},
      pages = {49-59},
      doi = {10.11648/j.advances.20230402.13},
      url = {https://doi.org/10.11648/j.advances.20230402.13},
      eprint = {https://article.sciencepublishinggroup.com/pdf/10.11648.j.advances.20230402.13},
      abstract = {Twitter, Facebook, and Instagram are popular social media platforms that allow people to connect with the world through a social network and to express, share, and publish information. While online connection via these platforms is immensely desirable and has become an unavoidable fact of daily life, the underbelly of social networks can be seen in the form of harmful or objectionable material. Fake news, rumors, hate speech, hostility, and bullying are documented examples of harmful material of major concern to society. Such damaging content has a negative impact on mental health and leads to financial losses that are rarely recoverable. Screening and filtering of such information is thus an urgent requirement. In this paper, we summarize popular social media (SM) platforms such as Facebook, WhatsApp, and LinkedIn, and use abbreviations such as UGC, ML, and AI. This review focuses on methods for detecting harmful content through natural language processing; it then examines how such material can be moderated.},
     year = {2023}
    }
    


  • TY  - JOUR
    T1  - Harmful Content on Social Media Detection Using by NLP
    AU  - Iqra Naz
    AU  - Rehhmat Illahi
    Y1  - 2023/07/13
    PY  - 2023
    N1  - https://doi.org/10.11648/j.advances.20230402.13
    DO  - 10.11648/j.advances.20230402.13
    T2  - Advances
    JF  - Advances
    JO  - Advances
    SP  - 49
    EP  - 59
    PB  - Science Publishing Group
    SN  - 2994-7200
    UR  - https://doi.org/10.11648/j.advances.20230402.13
    AB  - Twitter, Facebook, and Instagram are popular social media platforms that allow people to connect with the world through a social network and to express, share, and publish information. While online connection via these platforms is immensely desirable and has become an unavoidable fact of daily life, the underbelly of social networks can be seen in the form of harmful or objectionable material. Fake news, rumors, hate speech, hostility, and bullying are documented examples of harmful material of major concern to society. Such damaging content has a negative impact on mental health and leads to financial losses that are rarely recoverable. Screening and filtering of such information is thus an urgent requirement. In this paper, we summarize popular social media (SM) platforms such as Facebook, WhatsApp, and LinkedIn, and use abbreviations such as UGC, ML, and AI. This review focuses on methods for detecting harmful content through natural language processing; it then examines how such material can be moderated.
    VL  - 4
    IS  - 2
    ER  - 


Author Information
  • Department of Computer Science and Information Technology, Ghazi University, Dera Ghazi Khan, Pakistan

  • Department of Computer Science and Information Technology, Ghazi University, Dera Ghazi Khan, Pakistan
