Few-shot Knowledge Graph-to-Text Generation with Pretrained Language Models
Junyi Li | Tianyi Tang | Wayne Xin Zhao | Zhicheng Wei | Nicholas Jing Yuan | Ji-Rong Wen
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
A Rigorous Study on Named Entity Recognition: Can Fine-tuning Pretrained Model Lead to the Promised Land?
Hongyu Lin | Yaojie Lu | Jialong Tang | Xianpei Han | Le Sun | Zhicheng Wei | Nicholas Jing Yuan
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Fine-tuning pretrained models has achieved promising performance on standard NER benchmarks. Generally, these benchmarks are blessed with strong name regularity, high mention coverage, and sufficient context diversity. Unfortunately, when scaling NER to open situations, these advantages may no longer exist, which raises a critical question: can these previously successful approaches still work well when facing such challenges? As no dataset is currently available to investigate this problem, this paper proposes to conduct randomization tests on standard benchmarks. Specifically, we erase name regularity, mention coverage, and context diversity respectively from the benchmarks, in order to explore their impact on the generalization ability of models. To further verify our conclusions, we also construct a new open NER dataset that focuses on entity types with weaker name regularity and lower mention coverage. From both the randomization tests and the empirical experiments, we draw the following conclusions: 1) name regularity is critical for models to generalize to unseen mentions; 2) high mention coverage may undermine a model's generalization ability; and 3) context patterns may not require enormous data to capture when using pretrained encoders.
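To make the "erase name regularity" manipulation concrete, below is a minimal sketch (not the authors' released code) of one plausible way to perform it on a CoNLL-style NER dataset: every token inside an entity mention is replaced by a random string, so a model can no longer exploit surface cues in the names themselves while the surrounding context and tag sequence stay intact. All function names and the data layout here are illustrative assumptions.

```python
# Illustrative sketch only: one way to erase name regularity from an NER corpus.
# Assumes each sentence is a list of (token, BIO-tag) pairs; not the paper's actual code.
import random
import string


def random_token(length: int = 8) -> str:
    """Return a random lowercase string to stand in for an entity token."""
    return "".join(random.choices(string.ascii_lowercase, k=length))


def erase_name_regularity(sentences):
    """Replace tokens inside entity spans (B-*/I-* tags) with random strings.

    Context tokens (tag "O") and the tag sequence itself are left untouched,
    so only the surface form of entity names is randomized.
    """
    perturbed = []
    for sent in sentences:
        new_sent = []
        for token, tag in sent:
            if tag != "O":  # token is part of an entity mention
                new_sent.append((random_token(), tag))
            else:
                new_sent.append((token, tag))
        perturbed.append(new_sent)
    return perturbed


if __name__ == "__main__":
    demo = [[("Barack", "B-PER"), ("Obama", "I-PER"),
             ("visited", "O"), ("Paris", "B-LOC")]]
    print(erase_name_regularity(demo))
```

Analogous perturbations could target mention coverage (e.g., ensuring test mentions never appear in training) or context diversity (e.g., subsampling the contexts each mention occurs in), following the same pattern.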