%0 Conference Proceedings
%T Entity at SemEval-2021 Task 5: Weakly Supervised Token Labelling for Toxic Spans Detection
%A Jain, Vaibhav
%A Naghshnejad, Mina
%Y Palmer, Alexis
%Y Schneider, Nathan
%Y Schluter, Natalie
%Y Emerson, Guy
%Y Herbelot, Aurelie
%Y Zhu, Xiaodan
%S Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
%D 2021
%8 August
%I Association for Computational Linguistics
%C Online
%F jain-naghshnejad-2021-entity
%X Detection of toxic spans - detecting the toxicity of content at the granularity of tokens - is crucial for effective moderation of online discussions. The baseline approach to this problem with a transformer model is to add a token classification head to the language model and fine-tune its layers on the token-labeled dataset. One limitation of this baseline approach is the scarcity of labeled data. To improve the results, we studied leveraging existing public datasets for a related but different task: classification of entire comments/sentences. We propose two approaches: the first fine-tunes transformer models that are pre-trained on sentence classification samples. In the second, we perform weak supervision with soft attention to learn token-level labels from sentence labels. Our experiments show improvements in the F1 score over the baseline approach. The implementation has been released publicly.
%R 10.18653/v1/2021.semeval-1.127
%U https://aclanthology.org/2021.semeval-1.127
%U https://doi.org/10.18653/v1/2021.semeval-1.127
%P 935-940