Automatic Detection of Nastiness and Early Signs of Cyberbullying Incidents on Social Media

dc.contributor.advisorSolorio, Thamar
dc.contributor.committeeMemberGabriel, Edgar
dc.contributor.committeeMemberVerma, Rakesh M.
dc.contributor.committeeMemberHuang, Ruihong
dc.creatorSafi Samghabadi, Niloofar
dc.creator.orcid0000-0003-4435-6546
dc.date.issued2020
dc.description.abstractAlthough social media has made it easy for people to connect in an unlimited virtual space, it has also opened doors to people who misuse it to bully others. Nowadays, abusive behavior and cyberbullying are considered major issues in cyberspace that can seriously affect the mental and physical health of victims. However, because of the growing number of social media users, manual moderation of online content is impractical. Available automatic systems for hate speech and cyberbullying detection fail to make timely predictions, which makes them ineffective for warning the potential victims of these attacks. In this thesis, we aim to advance new technology that will help protect vulnerable online users against cyber attacks. As a first approximation of this goal, we develop computational methods to automatically identify extremely aggressive texts. We start by exploiting a wide range of linguistic features to create a machine learning model that detects online abusive content. Then, we build a deep neural architecture that identifies offensive content in short, noisy online texts more precisely by incorporating emotion information into the textual representations. We further expand these methods and propose a Natural Language Processing system that constantly monitors online conversations and triggers an alert when a possible case of cyberbullying is happening. We design a new evaluation framework and show that our system is able to provide timely and accurate cyberbullying predictions based on limited evidence. In this research, we are mainly concerned with kids and young adults, as they are the group of users most vulnerable to online attacks. To this end, we propose new language resources for both abusive language detection and cyberbullying detection on social media platforms that are especially popular among youth. Furthermore, in our experiments, we discuss the differences between these corpora and the other available resources, which include data on adult topics.
dc.description.departmentComputer Science, Department of
dc.format.digitalOriginborn digital
dc.identifier.citationPortions of this document appear in: Samghabadi, Niloofar Safi, Suraj Maharjan, Alan Sprague, Raquel Diaz-Sprague, and Thamar Solorio. "Detecting nastiness in social media." In Proceedings of the First Workshop on Abusive Language Online, pp. 63-72. 2017. And in: Samghabadi, Niloofar Safi, Deepthi Mave, Sudipta Kar, and Thamar Solorio. "RiTUAL-UH at TRAC 2018 shared task: aggression identification." arXiv preprint arXiv:1807.11712 (2018).
dc.rightsThe author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. UH Libraries has secured permission to reproduce any and all previously published materials contained in the work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
dc.subjectNatural Language Processing
dc.subjectAbusive Language Detection
dc.subjectCyberbullying Detection
dc.subjectEarly Text Categorization
dc.titleAutomatic Detection of Nastiness and Early Signs of Cyberbullying Incidents on Social Media
dc.type.genreThesis
thesis.degree.collegeCollege of Natural Sciences and Mathematics
thesis.degree.departmentComputer Science, Department of
thesis.degree.grantorUniversity of Houston
thesis.degree.nameDoctor of Philosophy

