Jin-Xia Huang


2024

More Insightful Feedback for Tutoring: Enhancing Generation Mechanisms and Automatic Evaluation
Wencke Liermann | Jin-Xia Huang | Yohan Lee | Kong Joo Lee
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Incorrect student answers can become valuable learning opportunities, provided that the student understands where they went wrong and why. To this end, rather than being given the correct answer, students should receive elaborated feedback on how to correct a mistake on their own. Highlighting the complex demands that the generation of such feedback places on a model's input utilization abilities, we propose two extensions to the training pipeline. First, we employ a KL regularization term between a standard and an enriched input format to achieve more targeted input representations. Second, we add a preference optimization step to encourage student-answer-adaptive feedback generation. The effectiveness of these extensions is underlined by a significant increase in model performance of 3.3 METEOR points. We go beyond traditional surface-form-based metrics to assess two important dimensions of feedback quality, namely faithfulness and informativeness. In doing so, we are the first to propose an automatic metric, which we call the Informativeness Index I2, that measures the degree to which feedback divulges the correct answer. We verify to what extent each metric captures feedback quality.
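The KL regularization the abstract describes can be illustrated with a minimal sketch: a divergence penalty between the model's output distributions under two input formats. The function names and the choice of divergence direction here are illustrative assumptions, not the paper's implementation.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kl_input_regularizer(logits_enriched, logits_standard):
    # Penalize divergence between the output distribution produced
    # from the enriched input and the one from the standard input,
    # pushing the standard-format representation toward the richer one.
    return kl_divergence(softmax(logits_enriched), softmax(logits_standard))
```

Added to a standard cross-entropy objective with a weighting coefficient, such a term would encourage the two input formats to yield consistent predictions.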

2021

Document-Grounded Goal-Oriented Dialogue Systems on Pre-Trained Language Model with Diverse Input Representation
Boeun Kim | Dohaeng Lee | Sihyung Kim | Yejin Lee | Jin-Xia Huang | Oh-Woog Kwon | Harksoo Kim
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering (DialDoc 2021)

A document-grounded goal-oriented dialogue system understands users' utterances and generates proper responses using information obtained from documents. The DialDoc21 shared task consists of two subtasks: subtask 1, finding text spans associated with users' utterances in documents, and subtask 2, generating responses based on the information obtained in subtask 1. In this paper, we propose two models (a knowledge span prediction model and a response generation model) for subtask 1 and subtask 2, respectively. In subtask 1, dialogue act losses are used with RoBERTa, and title embeddings are added to RoBERTa's input representation. In subtask 2, various special tokens and embeddings are added to the input representation of BART's encoder. We then propose a method that assigns different difficulty scores to examples in order to leverage curriculum learning. In subtask 1, our span prediction model achieved F1-scores of 74.81 (ranked 7th) and 73.41 (ranked 5th) in the test-dev phase and the test phase, respectively. In subtask 2, our response generation model achieved sacreBLEU scores of 37.50 (ranked 3rd) and 41.06 (ranked 1st) in the test-dev phase and the test phase, respectively.
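The curriculum learning step the abstract mentions can be sketched as scoring each training example for difficulty and presenting easier examples first. The difficulty heuristic below (span and response length) is an illustrative assumption; the paper's actual scoring method is not specified here.

```python
def difficulty_score(example):
    """Hypothetical heuristic: examples with longer grounding spans
    and longer target responses are treated as harder."""
    return len(example["span"].split()) + len(example["response"].split())

def curriculum_order(dataset):
    # Sort training examples from easy to hard, so the model sees
    # simpler cases before more demanding ones (curriculum learning).
    return sorted(dataset, key=difficulty_score)
```

A training loop would then iterate over `curriculum_order(dataset)`, optionally regrouping into staged buckets rather than a strict global sort.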

2004

A Statistical Model for Hangeul-Hanja Conversion in Terminology Domain
Jin-Xia Huang | Sun-Mee Bae | Key-sun Choi
Proceedings of the Third SIGHAN Workshop on Chinese Language Processing

2003

A unified statistical model for generalized translation memory system
Jin-Xia Huang | Wei Wang | Ming Zhou
Proceedings of Machine Translation Summit IX: Papers

We introduce a statistical framework for translation memory systems that unifies the system's different phases by letting them constrain each other, giving the translation memory system a statistical foundation. Compared to traditional translation memory systems, our model operates at a fine-grained sub-sentential level and thereby improves translation coverage. Compared with other approaches that exploit sub-sentential matches, it unifies the processes of source string segmentation, best example selection, and translation generation by making them constrain each other via the statistical confidence of each step. We realized this framework in a prototype system. Compared with an existing commercial translation memory system, our system exhibits clearly better performance on the "assistant quality metric" and gains improvements in the range of 26.3% to 55.1% on the "translation efficiency metric".
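One way to picture phases constraining each other via per-step confidences is to score each complete candidate (a segmentation, a retrieved example, and a generated translation) by the product of its step confidences and keep the highest-scoring one. This is a minimal sketch under that assumption; the data layout and scoring are illustrative, not the paper's model.

```python
import math

def best_translation(candidates):
    """Hypothetical sketch: each candidate is a tuple
    (segmentation_conf, example_conf, generation_conf, translation).
    The unified score is the product of the three confidences,
    computed as a sum of log-confidences for numerical stability,
    so a weak choice at any phase penalizes the whole candidate."""
    def score(candidate):
        seg_conf, ex_conf, gen_conf, _ = candidate
        return math.log(seg_conf) + math.log(ex_conf) + math.log(gen_conf)
    return max(candidates, key=score)[3]
```

Because all three phases enter one joint score, a segmentation that looks locally optimal can lose to one that enables a better example match or a more confident generation.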

2002

Structure Alignment Using Bilingual Chunking
Wei Wang | Ming Zhou | Jin-Xia Huang | Chang-Ning Huang
COLING 2002: The 19th International Conference on Computational Linguistics

2000

Using Bilingual Semantic Information in Chinese-Korean Word Alignment
Jin-Xia Huang | Key-Sun Choi
Proceedings of the 14th Pacific Asia Conference on Language, Information and Computation

Chinese-Korean Word Alignment Based on Linguistic Comparison
Jin-Xia Huang | Key-Sun Choi
Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics