Please use this identifier to cite or link to this item: http://ir.futminna.edu.ng:8080/jspui/handle/123456789/7227
Full metadata record
DC Field | Value | Language
dc.contributor.author | Umar, Abubakar | -
dc.contributor.author | Bashir, Sulaimon Adebayo | -
dc.contributor.author | Laud, Charles Ochei | -
dc.contributor.author | Ibrahim, Adeyanju | -
dc.date.accessioned | 2021-07-07T22:26:28Z | -
dc.date.available | 2021-07-07T22:26:28Z | -
dc.date.issued | 2018 | -
dc.identifier.citation | Umar, A., Bashir, S. A., Laud, C. O., & Adeyanju, I. A. (2018). Profiling Inappropriate Users’ Tweets Using Deep Long Short-Term Memory (LSTM) Neural Network. i-manager's Journal on Pattern Recognition, 5(4), 27 | en_US
dc.identifier.uri | http://repository.futminna.edu.ng:8080/jspui/handle/123456789/7227 | -
dc.description.abstract | In recent times, big Internet companies have come under increased pressure from governments and NGOs to remove inappropriate material from social media platforms (e.g., Twitter, Facebook, and YouTube). A typical example of this problem is the posting of hateful, abusive, and violent tweets on Twitter, which has been blamed for inciting hatred and violence and causing societal disturbances. Manual identification of such tweets and of the people who post them is very difficult because of the large number of active users and the frequency with which such tweets are posted. Existing approaches for identifying inappropriate tweets have focused on detecting such tweets without identifying the users who post them. This paper proposes an approach that can automatically identify different types of inappropriate tweets together with the users who post them. The proposed approach is based on a user profiling algorithm that uses a deep Long Short-Term Memory (LSTM) neural network trained to detect abusive language. With the support of word embedding features learned from the training set, the algorithm is able to classify users' tweets into different abusive language categories. Thereafter, the user profiling algorithm uses the classes assigned to each user's tweets to place that user in an abusive language category. Experiments on the test set show that the deep LSTM-based abusive language detection model reached an accuracy of 89.14% in detecting whether a tweet is bigoted, offensive, racist, extremism-related, or neutral. Also, the user profiling algorithm obtained an accuracy of 83.33% in predicting whether a user is a bigot, a racist, an extremist, a user of offensive language, or neutral. | en_US
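The profiling step described in the abstract (deriving a user's category from the classes assigned to their individual tweets) could be sketched as follows. This is a minimal illustration only: the paper does not publish code, and the aggregation rule shown here (a simple majority vote over per-tweet labels) is an assumption, not the authors' published algorithm.

```python
from collections import Counter

# Abusive language categories named in the abstract.
CATEGORIES = ["bigotry", "offensive", "racist", "extremism", "neutral"]

def profile_user(tweet_labels):
    """Assign a user the most frequent class among their classified tweets.

    `tweet_labels` is a list of per-tweet labels as produced by the
    LSTM classifier (hypothetical interface). Majority vote is an
    assumed aggregation rule for illustration.
    """
    if not tweet_labels:
        return "neutral"  # no evidence for a user: default assumption
    counts = Counter(tweet_labels)
    return counts.most_common(1)[0][0]
```

For example, a user whose tweets were labelled `["racist", "neutral", "racist"]` would be profiled as `racist` under this rule.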
dc.language.iso | en | en_US
dc.publisher | i-manager Publications | en_US
dc.subject | Twitter | en_US
dc.subject | Tweet Classification | en_US
dc.subject | User Profiling Algorithm | en_US
dc.subject | Feature Representation | en_US
dc.subject | Deep Learning | en_US
dc.title | Profiling Inappropriate Users’ Tweets Using Deep Long Short-Term Memory (LSTM) Neural Network | en_US
dc.type | Article | en_US
Appears in Collections:Computer Science

Files in This Item:
File | Description | Size | Format
profiling user tweet.pdf | - | 402.79 kB | Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.