Doolittle: Benchmarks and Corpora for Academic Writing Formalization
Shizhe Diao | Yongyu Lei | Liangming Pan | Tianqing Fang | Wangchunshu Zhou | Sedrick Keh | Min-Yen Kan | Tong Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Improving the quality of academic writing is a meaningful but challenging task. Conventional methods of language refinement focus on narrow, specific linguistic features within isolated sentences, such as grammatical errors and improper word use. We propose a more general task, Academic Writing Formalization (AWF), to improve the overall quality of formal academic writing at the paragraph level. We formulate this language refinement task as a formal text style transfer task that transfers informal-academic text into formal-academic text, and contribute a large-scale non-parallel dataset, Doolittle, for this purpose. Concurrently, we apply a method named metric-oriented reinforcement learning (MORL) to two large language models (LLMs), incorporating different levels of automatic feedback into the training process. Our experiments reveal that existing text style transfer models and grammatical error correction models address certain aspects of AWF but still show a significant performance gap compared to human performance. Meanwhile, language models fine-tuned with our MORL method exhibit considerably improved performance, rivaling the latest chatbot, ChatGPT, but still have a non-negligible gap compared to the ground-truth formal-academic texts in Doolittle.
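The abstract describes MORL only at a high level. As a minimal sketch of the general idea, the snippet below shows a REINFORCE-style update in which an automatic quality metric supplies the reward for a sampled rewrite; the model name (`gpt2`), the prompt, and the placeholder `quality_metric` are illustrative assumptions, not the authors' actual setup.

```python
# Sketch of metric-as-reward RL fine-tuning (REINFORCE-style), not the
# paper's implementation. Assumptions: small model, dummy reward metric.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; the paper fine-tunes larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def quality_metric(text: str) -> float:
    """Stand-in for an automatic formality/fluency scorer (assumption)."""
    return float(len(text.split())) / 100.0  # dummy reward for illustration

prompt = "Rewrite in formal academic style: the results was pretty good."
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

# Sample a candidate rewrite from the current policy.
with torch.no_grad():
    generated = model.generate(
        **inputs, do_sample=True, max_new_tokens=40,
        pad_token_id=tokenizer.eos_token_id,
    )
continuation = generated[0, prompt_len:]
reward = quality_metric(tokenizer.decode(continuation, skip_special_tokens=True))

# REINFORCE: re-score the sampled sequence with gradients enabled and
# scale the log-likelihood of the continuation tokens by the reward.
logits = model(generated).logits[:, :-1, :]
log_probs = torch.log_softmax(logits, dim=-1)
token_log_probs = log_probs.gather(-1, generated[:, 1:].unsqueeze(-1)).squeeze(-1)
cont_log_prob = token_log_probs[0, prompt_len - 1:].sum()
loss = -reward * cont_log_prob
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice, RL fine-tuning of LLMs typically uses PPO with a reward baseline and a KL penalty against the initial policy rather than the raw single-sample REINFORCE shown here.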