Ning Xu
2022
Transfer Learning and Prediction Consistency for Detecting Offensive Spans of Text
Amir Pouran Ben Veyseh | Ning Xu | Quan Tran | Varun Manjunatha | Franck Dernoncourt | Thien Nguyen
Findings of the Association for Computational Linguistics: ACL 2022
Toxic span detection is the task of recognizing offensive spans in a text snippet. Although there has been prior work on classifying text snippets as offensive or not, the task of recognizing the spans responsible for the toxicity of a text has not yet been explored. In this work, we introduce a novel multi-task framework for toxic span detection in which the model seeks to simultaneously predict offensive words and opinion phrases in order to leverage their inter-dependencies and improve performance. Moreover, we introduce a novel regularization mechanism to encourage the consistency of the model's predictions across similar inputs for toxic span detection. Our extensive experiments demonstrate the effectiveness of the proposed model compared to strong baselines.
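The abstract does not give implementation details; as a rough illustration of the prediction-consistency idea, the sketch below assumes a PyTorch token-classification setup and a hypothetical pair of logits from the original and a perturbed input, and penalizes divergence between the two sets of span predictions.

```python
import torch
import torch.nn.functional as F

def consistency_loss(logits_a: torch.Tensor, logits_b: torch.Tensor) -> torch.Tensor:
    """Symmetric KL divergence between per-token span logits, e.g. from the
    original input and a perturbed/augmented copy (an assumed formulation)."""
    log_p = F.log_softmax(logits_a, dim=-1)
    log_q = F.log_softmax(logits_b, dim=-1)
    p, q = log_p.exp(), log_q.exp()
    kl_pq = F.kl_div(log_q, p, reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, q, reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)

# Hypothetical multi-task objective combining the toxic-span loss, the
# opinion-phrase loss, and the weighted consistency term (lambda_c is an
# assumed hyperparameter, not taken from the paper):
# total = span_loss + opinion_loss + lambda_c * consistency_loss(logits, logits_aug)
```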
2021
Self-Attention Graph Residual Convolutional Networks for Event Detection with dependency relations
Anan Liu | Ning Xu | Haozhe Liu
Findings of the Association for Computational Linguistics: EMNLP 2021
The event detection (ED) task aims to classify events by identifying key event trigger words embedded in a piece of text. Previous research has demonstrated the validity of fusing syntactic dependency relations into Graph Convolutional Networks (GCNs). While existing GCN-based methods explore latent node-to-node dependency relations according to a stationary adjacency tensor, an attention-based dynamic tensor, which can pay more attention to key nodes such as the event trigger or its neighboring nodes, has not been developed. At the same time, suffering from graph information vanishing caused by the symmetric adjacency tensor, existing GCN models cannot achieve higher overall performance. In this paper, we propose a novel model, Self-Attention Graph Residual Convolution Networks (SA-GRCN), to mine latent node-to-node dependency relations via a self-attention mechanism, and introduce Graph Residual Networks (GResNet) to solve the graph information vanishing problem. Specifically, a self-attention module is constructed to generate an attention tensor representing the dependency attention scores of all words in the sentence. Furthermore, a graph residual term is added to the baseline SA-GCN to construct a GResNet. Considering the syntactic connections of the network input, we use the raw adjacency tensor, not processed by the self-attention module, as the residual term. We conduct experiments on the ACE2005 dataset, and the results show significant improvement over competitive baseline methods.
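As a minimal sketch of the described architecture, assuming a PyTorch implementation with illustrative class and variable names (not taken from the paper), one layer might combine an attention-derived dynamic adjacency with a residual propagation over the raw dependency adjacency:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttentionAdjacency(nn.Module):
    """Builds a dynamic, attention-based adjacency tensor from token
    representations (a sketch of the self-attention module)."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: [batch, seq, hidden]
        scores = self.query(h) @ self.key(h).transpose(1, 2)
        scores = scores / h.size(-1) ** 0.5
        return torch.softmax(scores, dim=-1)              # [batch, seq, seq]

class GraphResidualLayer(nn.Module):
    """One GCN layer over the attention adjacency, plus a residual term that
    propagates over the raw (unprocessed) dependency adjacency, GResNet-style."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.attn_adj = SelfAttentionAdjacency(hidden_dim)
        self.linear = nn.Linear(hidden_dim, hidden_dim)
        self.res_linear = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor, a_raw: torch.Tensor) -> torch.Tensor:
        a_attn = self.attn_adj(h)
        out = F.relu(self.linear(a_attn @ h))          # convolution over dynamic adjacency
        residual = self.res_linear(a_raw @ h)          # residual over raw dependency adjacency
        return out + residual
```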