Yuyang Zhang
2022
Improving Few-Shot Relation Classification by Prototypical Representation Learning with Definition Text
Li Zhenzhen | Yuyang Zhang | Jian-Yun Nie | Dongsheng Li
Findings of the Association for Computational Linguistics: NAACL 2022
Few-shot relation classification is difficult because the few available instances may not represent the relation patterns well. Some existing approaches have explored extra information, such as relation definitions, in addition to the instances, to learn a better relation representation. However, the encoding of this extra information has been performed independently of the labeled instances. In this paper, we propose to learn a prototype encoder from relation definitions in a way that is useful for relation instance classification. To this end, we use a joint training approach to train both a prototype encoder from definitions and an instance encoder. Extensive experiments on several datasets demonstrate the effectiveness of our definition-based prototype encoder, which enables us to outperform state-of-the-art approaches.
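The joint-training setup described in the abstract can be pictured with a minimal sketch: a definition encoder produces one prototype per relation, an instance encoder embeds the query instances, and both are trained end to end with a cross-entropy loss over instance-to-prototype distances. All names, shapes, and the toy bag-of-embeddings encoder below are illustrative assumptions, not the paper's released implementation (which would use a BERT-style encoder).

```python
# Sketch: jointly training a definition (prototype) encoder and an instance
# encoder for few-shot relation classification. Names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    """Stand-in bag-of-embeddings encoder; the paper would use a BERT-style model."""
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):              # (batch, seq_len)
        return self.emb(token_ids).mean(dim=1)  # (batch, dim) pooled representation

proto_encoder = TextEncoder()  # encodes each relation's definition text into a prototype
inst_encoder = TextEncoder()   # encodes labeled instances

def joint_loss(def_ids, inst_ids, labels):
    """Cross-entropy over distances from instances to definition-derived prototypes."""
    prototypes = proto_encoder(def_ids)                # (n_way, dim), one per relation
    instances = inst_encoder(inst_ids)                 # (batch, dim)
    logits = -torch.cdist(instances, prototypes) ** 2  # negative squared distance as logit
    return F.cross_entropy(logits, labels)

# Toy usage: a 5-way episode with 3 query instances and random token ids.
def_ids = torch.randint(0, 30522, (5, 32))
inst_ids = torch.randint(0, 30522, (3, 32))
labels = torch.tensor([0, 2, 4])
optim = torch.optim.Adam(
    list(proto_encoder.parameters()) + list(inst_encoder.parameters()), lr=1e-4)
loss = joint_loss(def_ids, inst_ids, labels)
loss.backward()
optim.step()
```

Because both encoders receive gradients from the same classification loss, the prototype space stays aligned with the instance space, in contrast to encoding the definitions independently.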
2021
DyLex: Incorporating Dynamic Lexicons into BERT for Sequence Labeling
Baojun Wang | Zhao Zhang | Kun Xu | Guang-Yuan Hao | Yuyang Zhang | Lifeng Shang | Linlin Li | Xiao Chen | Xin Jiang | Qun Liu
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Incorporating lexical knowledge into deep learning models has proven very effective for sequence labeling tasks. However, previous work commonly has difficulty dealing with large-scale dynamic lexicons, which often introduce excessive matching noise and require frequent updates. In this paper, we propose DyLex, a plug-in lexicon incorporation approach for BERT-based sequence labeling tasks. Instead of leveraging embeddings of words in the lexicon, as in conventional methods, we adopt word-agnostic tag embeddings to avoid re-training the representation while updating the lexicon. Moreover, we employ an effective supervised lexical knowledge denoising method to smooth out matching noise. Finally, we introduce a column-wise attention-based knowledge fusion mechanism to guarantee the pluggability of the proposed framework. Experiments on ten datasets across three tasks show that the proposed framework achieves new state-of-the-art results, even with very large-scale lexicons.
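The key idea of word-agnostic tag embeddings can be illustrated with a short sketch: lexicon matches are encoded as BIO-style match tags rather than word embeddings, so the lexicon can grow or change without retraining the representation, and a per-token attention over the matched sequences fuses them back into the encoder states. The matching routine, module names, and the simplified attention below are assumptions for illustration, not DyLex's actual architecture.

```python
# Sketch: word-agnostic tag embeddings with a token-wise attention fusion,
# loosely following the DyLex description. All names are hypothetical.
import torch
import torch.nn as nn

TAGS = {"O": 0, "B": 1, "I": 2}  # word-agnostic match tags

def match_lexicon(tokens, lexicon):
    """Exhaustive matching; returns one BIO tag sequence per matched entry."""
    tag_seqs = []
    for entry in lexicon:
        n = len(entry)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == entry:
                tags = [TAGS["O"]] * len(tokens)
                tags[i] = TAGS["B"]
                for j in range(i + 1, i + n):
                    tags[j] = TAGS["I"]
                tag_seqs.append(tags)
    return tag_seqs

class TagFusion(nn.Module):
    """Embeds match-tag sequences and fuses them with token hidden states
    via per-token attention across the matched sequences."""
    def __init__(self, hidden=768, tag_dim=768):
        super().__init__()
        self.tag_emb = nn.Embedding(len(TAGS), tag_dim)
        self.score = nn.Linear(hidden + tag_dim, 1)

    def forward(self, token_states, tag_ids):
        # token_states: (seq, hidden); tag_ids: (n_matches, seq)
        tag_states = self.tag_emb(tag_ids)                        # (n_matches, seq, tag_dim)
        expanded = token_states.unsqueeze(0).expand(tag_states.size(0), -1, -1)
        attn = self.score(torch.cat([expanded, tag_states], -1))  # (n_matches, seq, 1)
        weights = attn.softmax(dim=0)                             # attend across matches per token
        fused = (weights * tag_states).sum(dim=0)                 # (seq, tag_dim)
        return token_states + fused                               # residual, plug-in fusion

# Toy usage: two lexicon entries match an input sentence.
tokens = ["the", "new", "york", "times", "reported"]
lexicon = [["new", "york"], ["new", "york", "times"]]
tag_ids = torch.tensor(match_lexicon(tokens, lexicon))            # (2, 5)
fusion = TagFusion()
hidden = torch.randn(len(tokens), 768)                            # e.g., BERT output
print(fusion(hidden, tag_ids).shape)                              # torch.Size([5, 768])
```

Because only the small tag vocabulary is embedded, adding or removing lexicon entries changes the matcher's output but not any trained parameters, which is what makes the module plug-in.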