Enhancing Automatic Readability Assessment with Pre-training and Soft Labels for Ordinal Regression
Jinshan Zeng | Yudong Xie | Xianglong Yu | John Lee | Ding-Xuan Zhou
Findings of the Association for Computational Linguistics: EMNLP 2022
The readability assessment task aims to assign a difficulty grade to a text. While neural models have recently demonstrated impressive performance, most do not exploit the ordinal nature of the difficulty grades, and make little effort to initialize the model in a way that facilitates fine-tuning. We address these limitations with soft labels for ordinal regression, and with model pre-training through prediction of pairwise relative text difficulty. We incorporate these two components into a model based on hierarchical attention networks, and evaluate its performance on both English and Chinese datasets. Experimental results show that our proposed model outperforms competitive neural models and statistical classifiers on most datasets.
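The abstract does not spell out how the soft labels are built. A common way to encode ordinal targets, shown as a minimal sketch below, is to spread probability mass over neighbouring grades according to their distance from the true grade and train with cross-entropy against that distribution. The function names and the temperature parameter `tau` here are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def soft_labels(true_grades: torch.Tensor, num_grades: int, tau: float = 1.0) -> torch.Tensor:
    """Turn integer difficulty grades into soft target distributions.

    Probability mass decays with the distance between each candidate grade
    and the true grade, so adjacent grades receive non-zero weight.
    `tau` (a hypothetical knob) controls how peaked the distribution is.
    """
    grades = torch.arange(num_grades, dtype=torch.float32)                  # (K,)
    dist = (grades.unsqueeze(0) - true_grades.unsqueeze(1).float()).abs()   # (B, K)
    return F.softmax(-dist / tau, dim=1)                                    # (B, K)

def soft_label_loss(logits: torch.Tensor, true_grades: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Cross-entropy between the predicted grade distribution and the soft targets."""
    targets = soft_labels(true_grades, logits.size(1), tau)
    return -(targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()

# Example: 5 difficulty grades, a batch of 3 texts with gold grades 0, 2, 4.
logits = torch.randn(3, 5)
loss = soft_label_loss(logits, torch.tensor([0, 2, 4]))
```

Unlike one-hot targets, these soft targets penalize a prediction less when it lands on a grade close to the gold grade, which is one way to make a classifier respect the ordinal structure the abstract refers to.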