2024
Tree-of-Question: Structured Retrieval Framework for Korean Question Answering Systems
Dongyub Lee, Younghun Jeong, Hwa-Yeon Kim, Hongyeon Yu, Seunghyun Han, Taesun Whang, Seungwoo Cho, Chanhee Lee, Gunsu Lee, Youngbum Kim
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)
We introduce Korean language-specific RAG-based QA systems, primarily through the innovative Tree-of-Question (ToQ) methodology and enhanced query generation techniques. We address the complex, multi-hop nature of real-world questions by effectively integrating advanced LLMs with nuanced query planning. Our comprehensive evaluations, including a newly created Korean multi-hop QA dataset, demonstrate our method’s ability to elevate response validity and accuracy, especially in deeper levels of reasoning. This paper not only showcases significant progress in handling the intricacies of Korean linguistic structures but also sets a new standard in the development of context-aware and linguistically sophisticated QA systems.
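The abstract suggests a recursive plan-retrieve-answer loop over a tree of sub-questions. The sketch below illustrates that general idea only; the `llm` and `retrieve` callables, the prompts, and the stopping heuristic are hypothetical placeholders, not the paper's implementation.

```python
# Illustrative sketch of tree-structured question decomposition for
# multi-hop RAG, in the spirit of Tree-of-Question (ToQ). The `llm` and
# `retrieve` callables and the prompts are hypothetical placeholders.
from typing import Callable, List


def plan(question: str, llm: Callable[[str], str]) -> List[str]:
    """Ask the LLM for sub-questions; empty output means 'answerable as-is'."""
    out = llm(
        "If the question below needs multi-hop reasoning, list the "
        "sub-questions one per line; otherwise output nothing.\n"
        f"Question: {question}"
    )
    return [line.strip() for line in out.splitlines() if line.strip()]


def answer_with_toq(
    question: str,
    llm: Callable[[str], str],              # prompt -> completion
    retrieve: Callable[[str], List[str]],   # query -> passages
    depth: int = 0,
    max_depth: int = 2,
) -> str:
    """Recursively decompose a question; answer leaves with retrieval."""
    sub_questions = plan(question, llm) if depth < max_depth else []
    if not sub_questions:  # leaf: answer directly from retrieved context
        context = "\n".join(retrieve(question))
        return llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")
    # internal node: answer children first, then compose a final answer
    notes = [
        f"Q: {sq}\nA: {answer_with_toq(sq, llm, retrieve, depth + 1, max_depth)}"
        for sq in sub_questions
    ]
    return llm(
        "Using the sub-answers below, answer the original question.\n"
        + "\n\n".join(notes)
        + f"\n\nOriginal question: {question}\nAnswer:"
    )
```

The depth cap is one simple way to bound the tree; the paper's actual planning and stopping criteria are described in the text itself.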
2021
Deep Context- and Relation-Aware Learning for Aspect-based Sentiment Analysis
Shinhyeok Oh, Dongyub Lee, Taesun Whang, IlNam Park, Seo Gaeun, EungGyun Kim, Harksoo Kim
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Existing works on aspect-based sentiment analysis (ABSA) have adopted a unified approach that allows interactive relations among subtasks. However, we observe that these methods tend to predict polarities based on the literal meaning of aspect and opinion terms and consider relations among subtasks only implicitly, at the word level. In addition, identifying multiple aspect–opinion pairs with their polarities is much more challenging. Therefore, a comprehensive understanding of contextual information w.r.t. the aspect and opinion is further required in ABSA. In this paper, we propose the Deep Contextualized Relation-Aware Network (DCRAN), which allows interactive relations among subtasks with deep contextual information based on two modules (i.e., Aspect and Opinion Propagation and Explicit Self-Supervised Strategies). In particular, we design novel self-supervised strategies for ABSA that are effective in dealing with multiple aspects. Experimental results show that DCRAN outperforms previous state-of-the-art methods by large margins on three widely used benchmarks.
Two Heads are Better than One? Verification of Ensemble Effect in Neural Machine Translation
Chanjun Park, Sungjin Park, Seolhwa Lee, Taesun Whang, Heuiseok Lim
Proceedings of the Second Workshop on Insights from Negative Results in NLP
In the field of natural language processing, ensembles are broadly known to be effective in improving performance. This paper analyzes how ensembles of neural machine translation (NMT) models affect performance by designing various experimental setups (i.e., intra-ensemble, inter-ensemble, and non-convergence ensemble). For an in-depth examination, we analyze each ensemble method with respect to several aspects, such as different attention models and vocabulary strategies. Experimental results show that ensembling does not always result in performance gains, and we report noteworthy negative findings.
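As background, a standard way to ensemble NMT models at inference time is to average the models' next-token distributions at each decoding step. A minimal greedy-decoding sketch under that assumption (the paper's exact combination scheme may differ):

```python
# Minimal sketch of ensemble decoding: at each step, average the next-token
# distributions of K models and take the argmax. Each `step_fn` is a
# hypothetical callable mapping a token prefix to a probability vector.
from typing import Callable, List, Sequence

import numpy as np

EOS = 2  # assumed end-of-sequence token id


def ensemble_greedy_decode(
    step_fns: Sequence[Callable[[List[int]], np.ndarray]],
    max_len: int = 128,
) -> List[int]:
    prefix: List[int] = []
    for _ in range(max_len):
        # Average the models' next-token probabilities (uniform weights).
        probs = np.mean([fn(prefix) for fn in step_fns], axis=0)
        token = int(np.argmax(probs))
        if token == EOS:
            break
        prefix.append(token)
    return prefix
```

Uniform weighting is the simplest choice; the paper's intra-, inter-, and non-convergence setups vary which checkpoints and models are combined, not this averaging step per se.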
Capturing Speaker Incorrectness: Speaker-Focused Post-Correction for Abstractive Dialogue Summarization
Dongyub Lee, Jungwoo Lim, Taesun Whang, Chanhee Lee, Seungwoo Cho, Mingun Park, Heuiseok Lim
Proceedings of the Third Workshop on New Frontiers in Summarization
In this paper, we focus on improving the quality of summaries generated by neural abstractive dialogue summarization systems. Even though pre-trained language models generate well-constructed and promising results, it is still challenging to summarize conversations among multiple participants, since the summary should include a description of the overall situation and the actions of each speaker. This paper proposes self-supervised strategies for speaker-focused post-correction in abstractive dialogue summarization. Specifically, our model first discriminates which type of speaker correction is required in a draft summary and then generates a revised summary according to the required type. Experimental results show that our proposed method adequately corrects the draft summaries, and the revised summaries are significantly improved in both quantitative and qualitative evaluations.
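The described pipeline has two stages: a discriminator decides which speaker correction a draft summary needs, and a generator rewrites the draft accordingly. A schematic sketch of that control flow follows; the callables and the label set are assumptions for illustration, not the paper's models or taxonomy.

```python
# Schematic two-stage post-correction pipeline: a discriminator picks the
# required correction type for a draft summary, and a generator rewrites
# the draft accordingly. All callables and labels are illustrative.
from typing import Callable

CORRECTION_TYPES = ("none", "wrong_speaker", "missing_speaker")  # assumed labels


def post_correct(
    dialogue: str,
    draft: str,
    classify: Callable[[str, str], str],     # (dialogue, draft) -> correction type
    revise: Callable[[str, str, str], str],  # (dialogue, draft, type) -> summary
) -> str:
    correction = classify(dialogue, draft)
    if correction == "none":
        return draft  # draft is already consistent with the speakers
    return revise(dialogue, draft, correction)
```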
2020
Reference and Document Aware Semantic Evaluation Methods for Korean Language Summarization
Dongyub Lee, Myeong Cheol Shin, Taesun Whang, Seungwoo Cho, Byeongil Ko, Daniel Lee, EungGyun Kim, Jaechoon Jo
Proceedings of the 28th International Conference on Computational Linguistics
Text summarization refers to the process of generating a shorter form of text from a source document while preserving salient information. Existing work on text summarization is generally evaluated using recall-oriented understudy for gisting evaluation (ROUGE) scores. However, because ROUGE scores are computed based on n-gram overlap, they do not reflect semantic correspondences between generated and reference summaries. Because Korean is an agglutinative language that combines various morphemes into single words expressing several meanings, ROUGE is not suitable for Korean summarization. In this paper, we propose the Reference and Document Aware Semantic Score (RDASS), an evaluation metric that reflects the semantic meaning of both the reference summary and the original document. We then propose a method for improving the correlation of the metric with human judgment. Evaluation results show that the correlation with human judgment is significantly higher for our evaluation metric than for ROUGE scores.
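Conceptually, RDASS scores a generated summary by its embedding similarity to both the reference summary and the source document. A minimal sketch that averages the two cosine similarities, assuming a sentence encoder `embed` (e.g., an SBERT-style model) as a placeholder; see the paper for the exact formulation and the encoder it uses.

```python
# Minimal sketch of a reference-and-document-aware score: average the
# cosine similarity of the generated summary to the reference and to the
# source document in a shared embedding space. `embed` is a hypothetical
# sentence encoder, not the paper's actual model.
from typing import Callable

import numpy as np


def rdass(
    generated: str,
    reference: str,
    document: str,
    embed: Callable[[str], np.ndarray],
) -> float:
    p, r, d = embed(generated), embed(reference), embed(document)

    def cos(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    return (cos(p, r) + cos(p, d)) / 2.0
```

Scoring against the document as well as the reference is what makes the metric robust to valid summaries that are worded differently from the single reference, which pure n-gram overlap penalizes.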