Abstract
Systems that can automatically detect offensive content are of great value, for example, to provide protective settings for users or to assist social media moderators with the removal of odious language. In this paper, we present three machine learning models developed at the University of Tripoli, Libya, for the detection of misogyny in colloquial Arabic tweets. We report the results obtained with these models in the first Arabic Misogyny Identification shared task (ArMI'21), a subtrack of HASOC@FIRE2021. With our first model (optimized BERT-based pipelines), we placed second on sub-task A, Misogyny Content Identification, and third on sub-task B, Misogyny Behavior Identification.