A Concise Model for Multi-Criteria Chinese Word Segmentation with Transformer Encoder

Xipeng Qiu, Hengzhi Pei, Hang Yan, Xuanjing Huang


Abstract
Multi-criteria Chinese word segmentation (MCCWS) aims to exploit the relations among multiple heterogeneous segmentation criteria and thereby improve performance on each individual criterion. Previous work usually treats MCCWS as a set of distinct tasks learned jointly under a multi-task learning framework. In this paper, we propose a concise but effective unified model for MCCWS whose parameters are fully shared across all criteria. Leveraging the Transformer encoder, the unified model segments Chinese text according to a unique criterion token that indicates the desired output criterion. Moreover, the model can segment both simplified and traditional Chinese and exhibits excellent transfer capability. Experiments on eight datasets with different criteria show that our model outperforms our single-criterion baseline and other multi-criteria models. Source code is available on GitHub.
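To make the abstract's approach concrete, the sketch below illustrates the two ingredients it describes: prepending a criterion token to the character sequence so a fully-shared encoder knows which segmentation standard to emit, and decoding per-character BMES tags (the standard CWS tagging scheme) into words. The exact token spelling (e.g. `<pku>`) and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the MCCWS input format and BMES tag decoding.
# Assumption: the criterion-token spelling (e.g. "<pku>") is illustrative;
# the paper's actual vocabulary and model are not reproduced here.

def make_input(chars, criterion):
    """Prepend a criterion token so a fully-shared encoder knows
    which segmentation standard to produce for this sentence."""
    return [f"<{criterion}>"] + list(chars)

def decode_bmes(chars, tags):
    """Turn per-character BMES tags into a list of words.
    B = begin of word, M = middle, E = end, S = single-char word."""
    words, buf = [], ""
    for ch, tag in zip(chars, tags):
        buf += ch
        if tag in ("E", "S"):   # word boundary reached
            words.append(buf)
            buf = ""
    if buf:                     # tolerate a truncated tag sequence
        words.append(buf)
    return words

sentence = "我喜欢自然语言处理"
print(make_input(sentence, "pku"))
# One plausible PKU-style tagging: 我 / 喜欢 / 自然语言 / 处理
print(decode_bmes(sentence, list("SBEBMMEBE")))
```

A single-criterion model would drop the criterion token; here it is the only input-side signal distinguishing the eight criteria, which is what makes the fully-shared design concise.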
Anthology ID:
2020.findings-emnlp.260
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Venues:
EMNLP | Findings
Publisher:
Association for Computational Linguistics
Pages:
2887–2897
URL:
https://aclanthology.org/2020.findings-emnlp.260
DOI:
10.18653/v1/2020.findings-emnlp.260
PDF:
https://aclanthology.org/2020.findings-emnlp.260.pdf