Kyungmo Kim
2023
Context and Literacy Aware Learnable Metric for Text Simplification
Jeongwon Kwak | Hyeryun Park | Kyungmo Kim | Jinwook Choi
Proceedings of the Third Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Automatic evaluation of text simplification is important, but assessing how well a system transforms text into simpler sentences is challenging for various reasons. The most commonly used metric in text simplification, SARI, fails to give credit for generated words that are not present in the references, regardless of their meaning. We propose a new learnable evaluation metric that decomposes and reconstructs sentences to simultaneously measure the similarity and difficulty of sentences within a single system. Through experiments, we confirm that the proposed metric achieves the highest correlation with human evaluation.
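The SARI limitation mentioned in the abstract can be illustrated with a minimal, hedged sketch of SARI's add-operation score (unigram and set-based only; the full metric also scores keep and delete operations and averages over n-grams, and the function name here is illustrative, not from the paper):

```python
def sari_add_f1(source, output, references):
    # Simplified sketch of SARI's "add" component (unigram, set-based).
    src, out = set(source.split()), set(output.split())
    # Words the system added: present in the output but not the source.
    added = out - src
    # Words any reference added relative to the source.
    ref_added = set()
    for ref in references:
        ref_added |= set(ref.split()) - src
    if not added and not ref_added:
        return 1.0  # nothing needed to be added and nothing was
    correct = added & ref_added
    p = len(correct) / len(added) if added else 0.0
    r = len(correct) / len(ref_added) if ref_added else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

Note how an added word that is a perfectly good simplification still scores zero whenever it happens not to appear in any reference, which is the failure mode the abstract points out.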
2020
Feature Difference Makes Sense: A medical image captioning model exploiting feature difference and tag information
Hyeryun Park | Kyungmo Kim | Jooyoung Yoon | Seongkeun Park | Jinwook Choi
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Medical image captioning can reduce the workload of physicians and save time and expense by automatically generating reports. However, current datasets are small and limited, creating additional challenges for researchers. In this study, we propose a feature difference and tag information combined long short-term memory (LSTM) model for chest x-ray report generation. A feature vector extracted from the image conveys visual information, but its ability to describe the image is limited. Other image captioning studies exhibited improved performance by exploiting feature differences, so the proposed model also utilizes them. First, we propose a difference and tag (DiTag) model containing the difference between the patient and normal images. Then, we propose a multi-difference and tag (mDiTag) model that also contains information about low-level differences, such as contrast, texture, and localized area. Evaluation of the proposed models demonstrates that the mDiTag model provides more information to generate captions and outperforms all other models.
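The feature-difference idea behind the DiTag model can be sketched in a few lines. This is a hedged illustration under assumed shapes, not the paper's implementation: the function name and the exact concatenation order are hypothetical, and the real model feeds the resulting vector into an LSTM decoder.

```python
def ditag_input(patient_feat, normal_feat, tag_emb):
    # Illustrative sketch: the element-wise difference between patient and
    # normal chest x-ray features highlights abnormal regions, and the tag
    # embedding supplies label information about the image.
    diff = [p - n for p, n in zip(patient_feat, normal_feat)]
    # Concatenate patient features, the difference, and the tag embedding
    # into one vector for the captioning decoder.
    return list(patient_feat) + diff + list(tag_emb)
```

The mDiTag variant described in the abstract additionally incorporates low-level differences such as contrast, texture, and localized area, which this sketch omits.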