Sung-Hyon Myaeng

Also published as: Sung H. Myaeng, Sung Hyon Myaeng, Sung-hyon Myaeng


2023

FinePrompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in GPT-4
Jeonghwan Kim | Giwon Hong | Sung-Hyon Myaeng | Joyce Whang
Findings of the Association for Computational Linguistics: EMNLP 2023

Compositional reasoning across texts has been a long-standing challenge in natural language processing. With large language models like GPT-4 taking over the field, prompting techniques such as chain-of-thought (CoT) have been proposed to unlock the compositional, multi-step reasoning capabilities of LLMs. Despite their success, such prompts demand significant human effort to discover and validate. Our work draws attention to the idea of transferring task-specific inductive biases from finetuned models to prompts as a way of improving GPT-4’s compositional reasoning capabilities. To leverage these inductive biases, we formulate prompt templates that ease their transfer. Experimental results on multi-hop question answering and numerical reasoning over text show that our prompt scheme achieves competitive zero-shot and few-shot performance compared to existing prompts on complicated reasoning tasks, highlighting the importance of adopting the validated biases of the previous paradigm.
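A minimal sketch of what transferring a finetuned model's inductive bias into a prompt might look like; the template wording, task, and decomposition strategy are illustrative assumptions, not the paper's actual FinePrompt templates.

```python
# Illustrative sketch only: a prompt template that injects a finetuned
# multi-hop QA model's inductive bias (explicit sub-question
# decomposition) into a GPT-4 prompt. Not the paper's exact template.
TEMPLATE = """You are answering a multi-hop question.
Finetuned QA models solve this by (1) decomposing the question into
single-hop sub-questions, (2) answering each against the passages,
and (3) composing the final answer. Follow the same procedure.

Passages: {passages}
Question: {question}
Sub-questions, their answers, then the final answer:"""

def build_prompt(passages: str, question: str) -> str:
    return TEMPLATE.format(passages=passages, question=question)
```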

2022

Exploiting Numerical-Contextual Knowledge to Improve Numerical Reasoning in Question Answering
Jeonghwan Kim | Junmo Kang | Kyung-min Kim | Giwon Hong | Sung-Hyon Myaeng
Findings of the Association for Computational Linguistics: NAACL 2022

Numerical reasoning over text is a challenging subtask of question answering (QA) that requires an understanding of both text and numbers. However, the language models underlying existing numerical reasoning QA models tend to rely too heavily on pre-existing parametric knowledge at inference time, which commonly causes hallucination when interpreting numbers. Our work proposes a novel attention-masked reasoning model, NC-BERT, that learns to leverage number-related contextual knowledge to alleviate this over-reliance on parametric knowledge and enhance the numerical reasoning capabilities of the QA model. The empirical results suggest that understanding numbers in context by reducing the influence of parametric knowledge, and refining the numerical information in number embeddings, lead to improved numerical reasoning accuracy on DROP, a numerical QA dataset.
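To make the masking idea concrete, here is a minimal PyTorch sketch of steering attention toward number-related context; the window heuristic and function name are assumptions, and the actual NC-BERT masking scheme is more involved.

```python
import torch

def number_context_mask(token_ids, number_token_ids, window=5):
    """Additive attention mask (0 = visible, -inf = blocked) that lets
    tokens attend only to number tokens and their nearby context.
    A simplified illustration, not NC-BERT's exact scheme."""
    n = token_ids.size(0)
    is_num = torch.isin(token_ids, number_token_ids)
    keep = is_num.clone()
    for offset in range(1, window + 1):
        keep[:-offset] |= is_num[offset:]   # tokens just before a number
        keep[offset:] |= is_num[:-offset]   # tokens just after a number
    mask = torch.full((n, n), float("-inf"))
    mask[:, keep] = 0.0                     # kept tokens stay visible
    mask.fill_diagonal_(0.0)                # every token sees itself
    return mask
```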

Graph-Induced Transformers for Efficient Multi-Hop Question Answering
Giwon Hong | Jeonghwan Kim | Junmo Kang | Sung-Hyon Myaeng
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

A graph is a suitable data structure for representing the structural information of text. Recently, multi-hop question answering (MHQA) tasks, which require inter-paragraph/sentence linkage, have come to exploit these properties of graphs. Previous approaches to MHQA relied on leveraging graph information alongside pre-trained language model (PLM) encoders. However, this trend exhibits the following drawbacks: (i) sample inefficiency when training in a low-resource setting; (ii) lack of reusability due to changes in the model structure or input. Our work proposes the Graph-Induced Transformer (GIT), which applies graph-derived attention patterns directly within a PLM, without the need for external graph modules. GIT can leverage the useful inductive bias of graphs while retaining the unperturbed Transformer structure and parameters. Our experiments on HotpotQA demonstrate both the sample efficiency of GIT and its capacity to replace graph modules while preserving model performance.
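The general mechanism can be sketched as lifting a sentence-level graph to a token-level attention bias; the function below is a simplified assumption of how such a pattern could be constructed, not GIT's exact recipe.

```python
import torch

def graph_attention_mask(adjacency, token_spans):
    """Lift a sentence-level graph (adjacency[i][j] = 1 if sentence i
    links to sentence j) to a token-level additive attention mask, so
    that a PLM's self-attention follows graph edges. A simplified
    sketch of graph-induced attention, not GIT's exact construction."""
    num_tokens = max(end for _, end in token_spans)
    mask = torch.full((num_tokens, num_tokens), float("-inf"))
    for i, (si, ei) in enumerate(token_spans):
        for j, (sj, ej) in enumerate(token_spans):
            if i == j or adjacency[i][j]:
                mask[si:ei, sj:ej] = 0.0   # edge (or self) unblocks tokens
    return mask

# The mask would then be added as a bias inside the PLM's attention
# layers, leaving the Transformer's parameters untouched.
```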

2021

Can You Distinguish Truthful from Fake Reviews? User Analysis and Assistance Tool for Fake Review Detection
Jeonghwan Kim | Junmo Kang | Suwon Shin | Sung-Hyon Myaeng
Proceedings of the First Workshop on Bridging Human–Computer Interaction and Natural Language Processing

Customer reviews are useful in providing an indirect, secondhand experience of a product. People often use reviews written by other customers as a guideline prior to purchasing a product. Such behavior makes the authenticity of reviews on e-commerce platforms critical. However, fake reviews are an increasing nuisance for both consumers and product owners. To address this issue, we propose You Only Need Gold (YONG), an essential-information mining tool for detecting fake reviews and augmenting user discretion. Our experimental results show that humans perform poorly at fake review detection, that our tool substantially improves user capability, and that users ultimately need to rely on the tool.

Ultra-High Dimensional Sparse Representations with Binarization for Efficient Text Retrieval
Kyoung-Rok Jang | Junmo Kang | Giwon Hong | Sung-Hyon Myaeng | Joohee Park | Taewon Yoon | Heecheol Seo
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

The semantic matching capabilities of neural information retrieval can ameliorate the synonymy and polysemy problems of symbolic approaches. However, the dense representations of neural models are inefficient for first-stage retrieval and are thus better suited to re-ranking. Sparse representations, whether symbolic or latent, are more efficient with an inverted index. Combining the merits of sparse and dense representations, we propose an ultra-high dimensional (UHD) representation scheme equipped with directly controllable sparsity. UHD’s large capacity and minimal noise and interference among dimensions allow for binarized representations, which are highly efficient for storage and search. We also propose a bucketing method, in which the embeddings from multiple layers of BERT are selected/merged to represent diverse linguistic aspects. We test our models on MS MARCO and TREC CAR, showing that they outperform other sparse models.
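A minimal sketch of the sparsify-then-binarize step; the random projection merely stands in for the paper's learned UHD encoder, and the function name and seed are assumptions.

```python
import numpy as np

def binarize_topk(dense_vec: np.ndarray, dim: int, k: int) -> np.ndarray:
    """Expand a dense vector to an ultra-high-dimensional space, keep
    the top-k activations (directly controllable sparsity), and
    binarize. Illustrative only: the projection stands in for the
    learned UHD encoder."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((dim, dense_vec.shape[0]))
    uhd = proj @ dense_vec                 # expand dimensionality
    binary = np.zeros(dim, dtype=np.uint8)
    binary[np.argsort(uhd)[-k:]] = 1       # keep only top-k dimensions
    return binary

# Each active dimension of the binary vector acts like a "term" in an
# inverted index, which is what makes storage and search efficient.
```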

Leveraging Order-Free Tag Relations for Context-Aware Recommendation
Junmo Kang | Jeonghwan Kim | Suwon Shin | Sung-Hyon Myaeng
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Tag recommendation relies either on a ranking function for top-k tags or on an autoregressive generation method. However, previous methods each neglect one of two seemingly conflicting yet desirable characteristics of a tag set: orderlessness and inter-dependency. The ranking approach fails to address the inter-dependency among tags when they are ranked, while the autoregressive approach fails to take orderlessness into account because it is designed to exploit sequential relations among tokens. We propose a sequence-oblivious generation method for tag recommendation, in which the next tag to be generated is independent of both the order of the generated tags and the order of the ground-truth tags in the training data. Empirical results on two different domains, Instagram and Stack Overflow, show that our method is significantly superior to previous approaches.
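A minimal sketch of sequence-oblivious decoding; `score_fn` and the stopping threshold are hypothetical stand-ins for the model's next-tag distribution, not the paper's implementation.

```python
def generate_tags(score_fn, max_tags=5):
    """Sequence-oblivious decoding sketch: condition each step on the
    *set* of tags generated so far (a frozenset, so order cannot leak
    in) and greedily add the best new tag. `score_fn(tag_set)` is a
    hypothetical stand-in returning (best_new_tag, probability)."""
    tags = frozenset()
    for _ in range(max_tags):
        candidate, prob = score_fn(tags)
        if candidate is None or prob < 0.5:
            break                        # stop signal / low confidence
        tags = tags | {candidate}        # set union: no order recorded
    return tags
```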

Have You Seen That Number? Investigating Extrapolation in Question Answering Models
Jeonghwan Kim | Giwon Hong | Kyung-min Kim | Junmo Kang | Sung-Hyon Myaeng
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Numerical reasoning in machine reading comprehension (MRC) has shown drastic improvements over the past few years. While previous models for numerical MRC can interpolate the learned numerical reasoning capabilities, it is not clear whether they perform just as well on numbers unseen during training. Our work rigorously tests state-of-the-art models on DROP, a numerical MRC dataset, to see whether they can handle passages containing out-of-range numbers. A key finding is that the models fail to extrapolate to unseen numbers. Presenting numbers as digit-by-digit input to the model, we also propose the E-digit number form, which alleviates the lack of extrapolation and reveals the need to treat numbers differently from regular words in text. Our work provides valuable insight into numerical MRC models and into how number forms should be represented in MRC.
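For intuition, digit-by-digit input can be sketched as a simple preprocessing step; this is a simplification, since the full E-digit form additionally encodes magnitude (exponent) information.

```python
import re

def digitize(text: str) -> str:
    """Split every number into digit tokens. A simplified stand-in for
    digit-by-digit input; the paper's E-digit form also encodes the
    number's exponent."""
    return re.sub(r"\d+", lambda m: " ".join(m.group()), text)

print(digitize("The team scored 42 of 107 points."))
# -> The team scored 4 2 of 1 0 7 points.
```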

Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images
Nyoungwoo Lee | Suwon Shin | Jaegul Choo | Ho-Jin Choi | Sung-Hyon Myaeng
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation. Training such dialogue systems generally requires a large-scale dataset of multi-turn dialogues that involve images, but such datasets rarely exist. In response, this paper proposes a 45k multi-modal dialogue dataset created with minimal human intervention. Our method for creating the dataset consists of (1) preparing and pre-processing text dialogue datasets, (2) creating image-mixed dialogues using a text-to-image replacement technique, and (3) employing a contextual-similarity-based filtering step to ensure the contextual coherence of the dataset. To evaluate the validity of our dataset, we devise a simple retrieval model for dialogue sentence prediction tasks. Automatic metrics and human evaluation results on these tasks show that our dataset can be effectively used as training data for multi-modal dialogue systems that require understanding images and text in a context-aware manner. Our dataset and generation code are available at https://github.com/shh1574/multi-modal-dialogue-dataset.
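The filtering step (3) can be sketched as a cosine-similarity cutoff between the dialogue context and the candidate image's caption; the encoders and threshold here are assumptions, as the paper specifies its own similarity model and cutoff.

```python
import numpy as np

def keep_replacement(context_vec: np.ndarray, caption_vec: np.ndarray,
                     threshold: float = 0.6) -> bool:
    """Contextual-similarity filter sketch: keep a text-to-image
    replacement only if the image caption's embedding is close enough
    to the dialogue context's embedding. Threshold is an assumption."""
    cos = float(context_vec @ caption_vec /
                (np.linalg.norm(context_vec) * np.linalg.norm(caption_vec)))
    return cos >= threshold
```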

2020

Regularization of Distinct Strategies for Unsupervised Question Generation
Junmo Kang | Giwon Hong | Haritz Puerto San Roman | Sung-Hyon Myaeng
Findings of the Association for Computational Linguistics: EMNLP 2020

Unsupervised question answering (UQA) has been proposed to avoid the high cost of creating high-quality datasets for QA. One approach to UQA is to train a QA model with automatically generated questions. However, the generated questions are either too similar to a word sequence in the context or drift too far from its semantics, making it difficult to train a robust QA model. We propose a novel regularization method based on a teacher-student architecture that avoids bias toward a particular question generation strategy and modulates the process of generating individual words. Our experiments demonstrate that we achieve the goal of generating higher-quality questions for UQA across diverse QA datasets and tasks. We also show that this method is useful for creating a QA model with few-shot learning.

Roles and Utilization of Attention Heads in Transformer-based Neural Language Models
Jae-young Jo | Sung-Hyon Myaeng
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Sentence encoders based on the Transformer architecture have shown promising results on various natural language tasks. The main impetus lies in pre-trained neural language models that capture long-range dependencies among words, owing to the multi-head attention that is unique to the architecture. However, little is known about how linguistic properties are processed, represented, and utilized for downstream tasks among the hundreds of attention heads inside a pre-trained Transformer-based model. With the initial goal of examining the roles of attention heads in handling a set of linguistic features, we conducted a set of experiments with ten probing tasks and three downstream tasks on four pre-trained Transformer families (GPT, GPT-2, BERT, and ELECTRA). Meaningful insights emerge through heat-map visualization and are used to propose a relatively simple sentence representation method that takes advantage of the most influential attention heads, resulting in additional performance improvements on the downstream tasks.

Handling Anomalies of Synthetic Questions in Unsupervised Question Answering
Giwon Hong | Junmo Kang | Doyeon Lim | Sung-Hyon Myaeng
Proceedings of the 28th International Conference on Computational Linguistics

Advances in Question Answering (QA) research require additional datasets for new domains, languages, and types of questions, as well as for performance gains. Human creation of a QA dataset like SQuAD, however, is expensive. As an alternative, unsupervised QA approaches have been proposed so that QA training data can be generated automatically. However, the performance of unsupervised QA is much lower than that of supervised QA models. We identify two anomalies in the automatically generated questions and propose how they can be mitigated. We show that our approach significantly improves unsupervised QA across a number of QA tasks.

2019

Let Me Know What to Ask: Interrogative-Word-Aware Question Generation
Junmo Kang | Haritz Puerto San Roman | Sung-Hyon Myaeng
Proceedings of the 2nd Workshop on Machine Reading for Question Answering

Question Generation (QG) is a Natural Language Processing (NLP) task that aids advances in Question Answering (QA) and conversational assistants. Existing models focus on generating a question from a text and, possibly, the answer to the generated question. They need to determine the type of interrogative word to generate while also attending to the grammar and vocabulary of the question. In this work, we propose Interrogative-Word-Aware Question Generation (IWAQG), a pipelined system composed of two modules: an interrogative-word classifier and a QG model. The first module predicts the interrogative word, which is provided to the second module to create the question. Owing to increased recall in deciding the interrogative words used in the generated questions, the proposed model achieves new state-of-the-art results on QG for SQuAD, improving from 46.58 to 47.69 in BLEU-1, from 17.55 to 18.53 in BLEU-4, from 21.24 to 22.33 in METEOR, and from 44.53 to 46.94 in ROUGE-L.

Aligning Open IE Relations and KB Relations using a Siamese Network Based on Word Embedding
Rifki Afina Putri | Giwon Hong | Sung-Hyon Myaeng
Proceedings of the 13th International Conference on Computational Semantics - Long Papers

Open Information Extraction (Open IE) generates entity-relation-entity triples from large amounts of text in order to capture its key semantics. Given a triple, the relation expresses the type of semantic relation between the entities. Although relations from an Open IE system are more extensible than those used in a traditional Information Extraction system or a Knowledge Base (KB) such as a knowledge graph, the former lack semantics: an Open IE relation is simply a sequence of words, whereas a KB relation has a predefined meaning. As a way of providing meaning to an Open IE relation, we attempt to align it with one of a predefined set of relations used in a KB. Our approach uses a Siamese network that compares two sequences of word embeddings representing an Open IE relation and a predefined KB relation. To make the approach practical, we automatically generate a training dataset using distant supervision instead of relying on a hand-labeled dataset. Our experiments show that the proposed method captures relational semantics better than recent approaches.
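A minimal PyTorch sketch of the Siamese setup, with a shared encoder scoring both sides; the encoder type and layer sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class RelationSiamese(nn.Module):
    """Siamese sketch: one shared encoder embeds both the Open IE
    relation phrase and the KB relation name (as word-embedding
    sequences); the similarity of the two encodings scores the
    alignment. Sizes are illustrative, not the paper's setup."""
    def __init__(self, emb_dim=300, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)

    def encode(self, word_embs):                  # (batch, seq, emb_dim)
        _, h = self.encoder(word_embs)
        return h[-1]                              # (batch, hidden)

    def forward(self, openie_embs, kb_embs):
        a, b = self.encode(openie_embs), self.encode(kb_embs)
        return nn.functional.cosine_similarity(a, b)
```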

2018

Interpretable Word Embedding Contextualization
Kyoung-Rok Jang | Sung-Hyon Myaeng | Sang-Bum Kim
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

In this paper, we propose a method of calibrating a word embedding so that the semantics it conveys become more relevant to the context. Our method is novel in that its output shows clearly which senses originally present in a target word embedding become stronger or weaker. This is made possible by using sparse coding to recover the senses that comprise a word embedding.
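The sense-recovery step can be sketched with off-the-shelf sparse coding; the sense-atom dictionary is assumed given here, whereas in practice it would be learned from the embedding space.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

def sense_weights(word_vec: np.ndarray, sense_atoms: np.ndarray):
    """Recover the sense weights of a word embedding by sparse coding
    over a dictionary of sense atoms (rows of `sense_atoms`).
    Contextual calibration can then strengthen or weaken individual
    senses. The atom dictionary is assumed given, for illustration."""
    coder = SparseCoder(dictionary=sense_atoms,
                        transform_algorithm="lasso_lars",
                        transform_alpha=0.1)
    return coder.transform(word_vec.reshape(1, -1))[0]  # sparse weights
```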

2017

Elucidating Conceptual Properties from Word Embeddings
Kyoung-Rok Jang | Sung-Hyon Myaeng
Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications

In this paper, we introduce a method for identifying the components (i.e., dimensions) of word embeddings that strongly signify properties of a word. By elucidating such properties hidden in word embeddings, we can make word embeddings more interpretable and perform property-based meaning comparison. With this capability, we can answer questions like “To what degree does a given word have the property cuteness?” or “In what respect are two words similar?”. We verify our method by examining how the strength of property-signifying components correlates with the degree of prototypicality of a target word.
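One simple way to realize this, sketched below under the assumption of graded human property ratings per word, is to rank dimensions by their correlation with the property; the paper's actual procedure may differ in detail.

```python
import numpy as np

def property_dimensions(embeddings, property_scores, top=10):
    """Rank embedding dimensions by how strongly they correlate with a
    graded property (e.g. human 'cuteness' ratings per word). A simple
    correlation-based sketch of finding property-signifying components.
    embeddings: (n_words, dim); property_scores: (n_words,)."""
    centered = embeddings - embeddings.mean(axis=0)
    scores = property_scores - property_scores.mean()
    corr = centered.T @ scores / (
        np.linalg.norm(centered, axis=0) * np.linalg.norm(scores) + 1e-9)
    return np.argsort(-np.abs(corr))[:top]   # most indicative dimensions
```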

A Computational Study on Word Meanings and Their Distributed Representations via Polymodal Embedding
Joohee Park | Sung-hyon Myaeng
Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)

Distributed representations have become a popular approach to capturing word meaning. Despite their success and practical value, however, questions arise about the relationship between a true word meaning and its distributed representation. In this paper, we examine this relationship via a polymodal embedding approach, inspired by the theory that humans draw on diverse sources when developing a word meaning. The results suggest that existing embeddings fail to capture certain aspects of word meaning, which the polymodal approach significantly improves. We also show distinct characteristics of different types of words (e.g., concreteness) via computational studies. Finally, we show that our proposed embedding method outperforms the baselines on word similarity and hypernym prediction tasks.

2013

Feature Selection Using a Semantic Hierarchy for Event Recognition and Type Classification
Yoonjae Jeong | Sung-Hyon Myaeng
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2010

Detecting Experiences from Weblogs
Keun Chan Park | Yoonjae Jeong | Sung Hyon Myaeng
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

Simplicity is Better: Revisiting Single Kernel PPI Extraction
Sung-Pil Choi | Sung-Hyon Myaeng
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

2006

Concept Unification of Terms in Different Languages for IR
Qing Li | Sung-Hyon Myaeng | Yun Jin | Bo-yeong Kang
Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics

1999

Complementing dictionary-based query translations with corpus statistics for cross-language IR
Sung Hyon Myaeng | Myung-Gil Jang
Proceedings of Machine Translation Summit VII

For cross-language information retrieval (CLIR), queries or documents are often translated into the other language to create a monolingual information retrieval situation. Having surveyed recent research on translation-based CLIR, we are convinced that an effective query translation method is an essential element of a practical CLIR system of reasonable quality. After summarizing the arguments and methods for query translation and the survey results for dictionary-based translation methods, this paper describes a relatively simple yet effective method of using mutual information to handle the ambiguity problem, known to be the major factor in low performance compared to the monolingual case. Our experimental results on the TREC-6 collection show that this method achieves up to 85% of monolingual retrieval performance and 96% of the manual disambiguation case.
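The core idea can be sketched as choosing the combination of per-term translation candidates that maximizes pairwise pointwise mutual information estimated from target-corpus counts; the data structures here are assumptions, and smoothing and windowing details follow the paper.

```python
import math
from itertools import product

def best_translation(candidates, cooccur, count, total):
    """Pick the combination of per-term translations that maximizes
    pairwise PMI from corpus statistics. `candidates` is a list of
    candidate-translation lists (one per query term); `cooccur` and
    `count` map word pairs / words to corpus counts. A simplified
    sketch of MI-based disambiguation."""
    def pmi(a, b):
        p_ab = cooccur.get((a, b), 0) / total
        p_a, p_b = count.get(a, 0) / total, count.get(b, 0) / total
        return math.log(p_ab / (p_a * p_b)) if p_ab and p_a and p_b else 0.0

    def score(combo):
        return sum(pmi(a, b) for i, a in enumerate(combo)
                   for b in combo[i + 1:])

    return max(product(*candidates), key=score)
```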

Using Mutual Information to Resolve Query Translation Ambiguities and Query Term Weighting
Myung-Gil Jang | Sung Hyon Myaeng | Se Young Park
Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics

1996

Extraction of Thematic Roles from Dictionary Definitions
Michael L. Mc Hale | Sung H. Myaeng
Proceedings of the 11th Pacific Asia Conference on Language, Information and Computation

1993

DR-LINK: Document Retrieval Using Linguistic Knowledge
Elizabeth D. Liddy | Sung H. Myaeng
Human Language Technology: Proceedings of a Workshop Held at Plainsboro, New Jersey, March 21-24, 1993

DR-LINK System: Phase I Summary
Elizabeth D. Liddy | Sung H. Myaeng
TIPSTER TEXT PROGRAM: PHASE I: Proceedings of a Workshop held at Fredricksburg, Virginia, September 19-23, 1993