- Senthil P., Assistant Professor, Department of Computer Science and Engineering, Karpagam College of Engineering, Coimbatore, Tamil Nadu, India
- Aniruthya A., Student, Department of Computer Science and Engineering, Karpagam College of Engineering, Coimbatore, Tamil Nadu, India
- Harini S., Student, Department of Computer Science and Engineering, Karpagam College of Engineering, Coimbatore, Tamil Nadu, India
- Rahaswedha K., Student, Department of Computer Science and Engineering, Karpagam College of Engineering, Coimbatore, Tamil Nadu, India
Abstract
In the digital age, social media platforms play a vital role in facilitating user engagement, encompassing both positive interactions and avenues for negative, often harmful behavior. Recognizing and addressing toxic exchanges is paramount to nurturing healthy online communities and preserving users’ well-being. This study introduces a method for identifying toxic interactions using the Gradient Boosting Regression Trees (GBRT) algorithm, a machine learning approach known for its high accuracy and its ability to model intricate, non-linear relationships in data. The proposed GBRT model is compared against five traditional classification techniques commonly employed in toxicity identification: Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM), Naïve Bayes (NB), and the Stochastic Gradient Descent (SGD) Classifier. The comparative analysis uses accuracy, precision, recall, and F1-score, and the results show that GBRT outperforms the other algorithms overall. The precision rates of GBRT, SVM, RF, LR, NB, and the SGD Classifier are 96, 94, 89, 88, 85, and 81%, respectively; the accuracy rates of GBRT, RF, SVM, LR, NB, and the SGD Classifier are 95, 93, 89, 83, 80, and 78%; the recall rates of GBRT, RF, SVM, NB, LR, and the SGD Classifier are 95, 92, 90, 87, 84, and 81%; and the F1-scores of GBRT, RF, SVM, LR, NB, and the SGD Classifier are 94, 91, 89, 86, 83, and 80%. These outcomes were obtained through extensive trials on publicly available social media datasets, the Final Balanced Dataset and YouToxic, totaling 57,746 samples.
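The comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' pipeline: the toy texts, the TF-IDF features, and the specific model hyperparameters are assumptions for demonstration only, and sklearn's `GradientBoostingClassifier` stands in for the GBRT approach the paper evaluates.

```python
# Hypothetical sketch: benchmarking the six classifiers named in the abstract
# on a tiny, made-up toxic/non-toxic comment set (labels: 1 = toxic, 0 = not).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

texts = ["you are awful", "great video, thanks", "nobody likes you",
         "really helpful tutorial", "shut up idiot", "love this channel"] * 20
labels = [1, 0, 1, 0, 1, 0] * 20

X = TfidfVectorizer().fit_transform(texts)          # bag-of-words TF-IDF features
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=42)

models = {
    "GBRT": GradientBoostingClassifier(random_state=42),
    "RF":   RandomForestClassifier(random_state=42),
    "SVM":  LinearSVC(),
    "LR":   LogisticRegression(max_iter=1000),
    "NB":   MultinomialNB(),
    "SGD":  SGDClassifier(random_state=42),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    print(f"{name}: acc={accuracy_score(y_te, pred):.2f} "
          f"P={p:.2f} R={r:.2f} F1={f1:.2f}")
```

On a real corpus such as the datasets named above, the same loop would be run after standard text cleaning, and the printed metrics would correspond to the accuracy, precision, recall, and F1 figures reported in the abstract.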
Keywords: Toxic comment detection, X, YouTube, gradient boosting regression trees, logistic regression, random forest, support vector machine, Naïve Bayes, SGD classifier, machine learning
[This article belongs to Trends in Opto-electro & Optical Communication]
Senthil P., Aniruthya A., Harini S., Rahaswedha K. Gradient Boosted Regression Tree Approach to Predicting Toxic Interactions on X and YouTube. Trends in Opto-electro & Optical Communication. 2025; 15(03):7-14. Available from: https://journals.stmjournals.com/toeoc/article=2025/view=227848

Trends in Opto-electro & Optical Communication

| Field | Value |
|---|---|
| Volume | 15 |
| Issue | 03 |
| Received | 14/06/2025 |
| Accepted | 19/06/2025 |
| Published | 10/09/2025 |
| Publication Time | 88 Days |