Hyojun Go
2023
Cross Encoding as Augmentation: Towards Effective Educational Text Classification
Hyun Seung Lee | Seungtaek Choi | Yunsung Lee | Hyeongdon Moon | Shinhyeok Oh | Myeongho Jeong | Hyojun Go | Christian Wallraven
Findings of the Association for Computational Linguistics: ACL 2023
Text classification in education, usually called auto-tagging, is the automated process of assigning relevant tags to educational content such as questions and textbooks. However, auto-tagging suffers from a data scarcity problem, which stems from two major challenges: 1) the tag space is large, and 2) the task is multi-label. Although retrieval approaches reportedly perform well in low-resource scenarios, few efforts have directly addressed the data scarcity problem. To mitigate these issues, we propose CEAA, a novel retrieval approach that provides effective learning for educational text classification. Our main contributions are as follows: 1) we leverage transfer learning from question-answering datasets, and 2) we propose a simple but effective data augmentation method that introduces cross-encoder-style texts into a bi-encoder architecture while keeping inference efficient. An extensive set of experiments shows that, compared to state-of-the-art models, the proposed method is effective in multi-label scenarios and on low-resource tags.
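To make the augmentation idea concrete, the sketch below shows a bi-encoder scoring tags with independent encodings (efficient inference) while also feeding cross-encoder-style concatenated question-tag texts through the same encoder as extra training inputs. This is a minimal illustration, not the authors' released implementation: the checkpoint name, mean pooling, dot-product scoring, and the [SEP] concatenation format are all assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")  # one shared encoder


@torch.no_grad()
def embed(texts):
    """Mean-pool token states into one vector per input text."""
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = enc(**batch).last_hidden_state       # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)  # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)   # (B, H)


question = "What is the derivative of x^2 with respect to x?"
tags = ["calculus", "geometry", "probability"]

# Bi-encoder view (efficient inference): tag embeddings can be precomputed
# once, so classifying a new question is one encoder pass plus dot products.
scores = embed([question]) @ embed(tags).T        # (1, num_tags)

# Cross-encoder-style augmentation (training only): the concatenated
# question-tag pair goes through the same encoder as an extra example,
# so the model also sees the two texts in joint context.
augmented = [f"{question} [SEP] {tag}" for tag in tags]
aug_vecs = embed(augmented)
```

The design point is that the joint question-tag inputs only appear during training; at inference time the model falls back to the plain bi-encoder scoring above.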
Evaluation of Question Generation Needs More References
Shinhyeok Oh | Hyojun Go | Hyeongdon Moon | Yunsung Lee | Myeongho Jeong | Hyun Seung Lee | Seungtaek Choi
Findings of the Association for Computational Linguistics: ACL 2023
Question generation (QG) is the task of generating a valid and fluent question from a given context and a target answer. Depending on their purpose, instructors can ask about different concepts even in the same context, and the same concept can be phrased in different ways. However, QG evaluation usually relies on similarity to a single reference, measured with n-gram-based or learned metrics, which is not sufficient to fully assess the potential of QG methods. To this end, we propose paraphrasing the reference question for a more robust QG evaluation. Using large language models such as GPT-3, we create semantically and syntactically diverse questions and then adopt a simple aggregation of the popular evaluation metrics as the final score. Through our experiments, we find that using multiple (pseudo) references is more effective for QG evaluation, showing a higher correlation with human judgments than evaluation with a single reference.
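As an illustration of the multi-reference idea, the sketch below scores a generated question against the original reference plus stand-ins for LLM paraphrases, aggregating sentence-level BLEU over all references. The example questions, the hard-coded paraphrases (in place of GPT-3 outputs), and the max aggregation are assumptions, not the paper's exact protocol.

```python
from sacrebleu import sentence_bleu

generated = "What does the mitochondria do in a cell?"
reference = "What is the function of the mitochondria?"

# Stand-ins for the semantically and syntactically diverse paraphrases
# that a large language model such as GPT-3 would generate.
pseudo_references = [
    "What role does the mitochondria play in a cell?",
    "What is the mitochondria responsible for?",
]

# Score against each (pseudo) reference, then aggregate; taking the max
# rewards a hypothesis that matches any acceptable phrasing.
all_refs = [reference] + pseudo_references
scores = [sentence_bleu(generated, [ref]).score for ref in all_refs]
print(f"single-reference BLEU: {scores[0]:.1f}")
print(f"multi-reference BLEU (max): {max(scores):.1f}")
```

Any reference-based metric (e.g., a learned metric in place of BLEU) can be dropped into the same aggregation loop.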