AbuseEval v1.0
Link to publication: http://www.lrec-conf.org/proceedings/lrec2020/pdf/2020.lrec-1.760.pdf
Link to data: https://github.com/tommasoc80/AbuseEval
Task description: Explicitness annotation of offensive and abusive content
Details of task: An enriched version of the OffensEval/OLID dataset that distinguishes explicit from implicit offensive messages and adds a new annotation dimension for abusive messages. Labels for offensive language: EXPLICIT, IMPLICIT, NOT; labels for abusive language: EXPLICIT, IMPLICIT, NOTABU
Size of dataset: 14,100
Percentage abusive: 20.75%
Language: English
Level of annotation: Tweets
Platform: Twitter
Medium: Text
Reference: Caselli, T., Basile, V., Mitrović, J., Kartoziya, I., and Granitzer, M. 2020. "I Feel Offended, Don't Be Abusive! Implicit/Explicit Messages in Offensive and Abusive Language". In Proceedings of the 12th Language Resources and Evaluation Conference (pp. 6193-6202). European Language Resources Association.
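
To illustrate the two-layer annotation scheme, the sketch below builds a few toy annotations (invented for illustration, not taken from the corpus) and computes the share of abusive messages in the way the "Percentage abusive" field above is defined, i.e. the fraction of tweets whose abusive-layer label is not NOTABU:

```python
# Toy examples of the two AbuseEval annotation layers.
# The label combinations here are illustrative only, not real corpus rows.
rows = [
    {"offensive": "EXPLICIT", "abusive": "EXPLICIT"},
    {"offensive": "IMPLICIT", "abusive": "IMPLICIT"},
    {"offensive": "EXPLICIT", "abusive": "NOTABU"},  # offensive but not abusive
    {"offensive": "NOT",      "abusive": "NOTABU"},
]

# A message counts as abusive if its abusive-layer label is not NOTABU.
abusive = [r for r in rows if r["abusive"] != "NOTABU"]
pct_abusive = 100 * len(abusive) / len(rows)
print(f"{pct_abusive:.2f}% abusive")  # 50.00% on this toy sample
```

On the full 14,100-tweet dataset the same calculation yields the 20.75% figure reported above.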