Hyeju Jang


2024

pdf bib
Improving Multi-hop Logical Reasoning in Knowledge Graphs with Context-Aware Query Representation Learning
Jeonghoon Kim | Heesoo Jung | Hyeju Jang | Hogun Park
Findings of the Association for Computational Linguistics: ACL 2024

Multi-hop logical reasoning on knowledge graphs is a pivotal task in natural language processing, with numerous approaches aiming to answer First-Order Logic (FOL) queries. Recent geometry (e.g., box, cone) and probability (e.g., beta distribution)-based methodologies have effectively addressed complex FOL queries. However, a common challenge across these methods lies in determining accurate geometric bounds or probability parameters for these queries. The challenge arises because existing methods rely on linear sequential operations within their computation graphs, overlooking the logical structure of the query and the relation-induced information that can be gleaned from the relations of the query, which we call the context of the query. To address the problem, we propose a model-agnostic methodology that enhances the effectiveness of existing multi-hop logical reasoning approaches by fully integrating the context of the FOL query graph. Our approach distinctively discerns (1) the structural context inherent to the query structure and (2) the relation-induced context unique to each node in the query graph as delineated in the corresponding knowledge graph. This dual-context paradigm helps nodes within a query graph attain refined internal representations throughout the multi-hop reasoning steps. Through experiments on two datasets, our method consistently enhances three multi-hop reasoning foundation models, achieving performance improvements of up to 19.5%. Our code is available at https://github.com/kjh9503/caqr.
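As a purely illustrative sketch of the dual-context idea described above (not the released CAQR code), the snippet below refines a query-graph node embedding with a structural encoding and an aggregate of incoming relation embeddings; the tensor shapes, the mean aggregation, and the linear refinement layer are all assumptions.

# Hypothetical sketch: enrich a query-graph node with (1) structural context and
# (2) relation-induced context before the usual reasoning operator is applied.
import torch

dim = 32
node = torch.randn(dim)                  # current node embedding in the query graph
incoming_rels = torch.randn(3, dim)      # embeddings of relations on incoming edges
structural_code = torch.randn(dim)       # encoding of the node's position in the query structure

relation_context = incoming_rels.mean(dim=0)               # relation-induced context
context = torch.cat([structural_code, relation_context])   # combined context vector
refined = node + torch.nn.Linear(2 * dim, dim)(context)    # context-aware refinement
print(refined.shape)  # torch.Size([32])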

pdf bib
Halu-NLP at SemEval-2024 Task 6: MetaCheckGPT - A Multi-task Hallucination Detection using LLM uncertainty and meta-models
Rahul Mehta | Andrew Hoblitzell | Jack O’keefe | Hyeju Jang | Vasudeva Varma
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)

Hallucinations in large language models (LLMs) have recently become a significant problem. A recent effort in this direction is a shared task at SemEval-2024 Task 6, SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes. This paper describes our winning solution, ranked 1st and 2nd in the two sub-tasks of the model-agnostic and model-aware tracks respectively. We propose a meta-regressor-based ensemble of LLMs built on a random forest algorithm that achieves the highest scores on the leaderboard. We also experiment with various transformer-based models and black-box methods like ChatGPT, Vectara, and others. In addition, we perform an error analysis comparing ChatGPT against our best model, which shows the limitations of the former.
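A minimal sketch of the meta-model idea, assuming each base LLM contributes one scalar uncertainty signal per example; the feature columns, labels, and RandomForestRegressor configuration below are placeholders, not the system's actual feature set.

# Hypothetical sketch: a random-forest meta-regressor over per-LLM hallucination signals.
# Feature values here are synthetic; in practice each column would be one base model's
# uncertainty / consistency score for the same (input, output) pair.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 4))                      # one score per base LLM (synthetic)
y = (X.mean(axis=1) > 0.5).astype(float)      # 1.0 = hallucination (synthetic labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
meta = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = (meta.predict(X_te) >= 0.5).astype(float)   # threshold the regressed score
print("held-out accuracy:", (pred == y_te).mean())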

2023

pdf bib
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Student Research Workshop
Dongfang Li | Rahmad Mahendra | Zilu Peter Tang | Hyeju Jang | Yugo Murawaki | Derek Fai Wong
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics: Student Research Workshop

2021

pdf bib
KW-ATTN: Knowledge Infused Attention for Accurate and Interpretable Text Classification
Hyeju Jang | Seojin Bang | Wen Xiao | Giuseppe Carenini | Raymond Ng | Young ji Lee
Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures

Text classification has wide-ranging applications in various domains. While neural network approaches have drastically advanced performance in text classification, they tend to require large amounts of training data, and interpretability is often an issue. As a step towards better accuracy and interpretability, especially on small data, in this paper we present a new knowledge-infused attention mechanism, called KW-ATTN (KnoWledge-infused ATTentioN), to incorporate high-level concepts from external knowledge bases into neural network models. We show that KW-ATTN outperforms both baseline models that use only words and other approaches that use concepts in terms of classification accuracy, which indicates that high-level concepts help model prediction. Furthermore, crowdsourced human evaluation suggests that the additional concept information helps the interpretability of the model.
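A minimal sketch of attending over external concept embeddings alongside a word-based representation, in the spirit of the mechanism described above; the shapes, the document-level query, and the concatenation scheme are assumptions rather than the exact KW-ATTN architecture.

# Hypothetical sketch: attention over KB concept embeddings linked to a document.
import torch
import torch.nn.functional as F

batch, n_words, n_concepts, dim = 2, 10, 5, 64
words = torch.randn(batch, n_words, dim)        # contextual word representations
concepts = torch.randn(batch, n_concepts, dim)  # embeddings of linked KB concepts

query = words.mean(dim=1, keepdim=True)                         # document-level query (illustrative)
scores = torch.matmul(query, concepts.transpose(1, 2)) / dim ** 0.5
alpha = F.softmax(scores, dim=-1)                                # interpretable concept weights
concept_summary = torch.matmul(alpha, concepts)                  # weighted concept vector
doc_repr = torch.cat([query, concept_summary], dim=-1).squeeze(1)  # fed to a classifier head
print(doc_repr.shape, alpha.shape)  # (2, 128) (2, 1, 5)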

pdf bib
T3-Vis: visual analytic for Training and fine-Tuning Transformers in NLP
Raymond Li | Wen Xiao | Lanjun Wang | Hyeju Jang | Giuseppe Carenini
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations

Transformers are the dominant architecture in NLP, but their training and fine-tuning remain very challenging. In this paper, we present the design and implementation of a visual analytic framework that assists researchers in this process by providing them with valuable insights into the model’s intrinsic properties and behaviours. Our framework offers an intuitive overview that allows the user to explore different facets of the model (e.g., hidden states, attention) through interactive visualization, and provides a suite of built-in algorithms that compute the importance of model components and of different parts of the input sequence. Case studies and feedback from a user focus group indicate that the framework is useful, and suggest several improvements. Our framework is available at: https://github.com/raymondzmc/T3-Vis.
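As a small companion example, the snippet below pulls the two model facets the framework visualizes (per-layer hidden states and attention maps) from a Hugging Face transformer; it only sketches data extraction, not the framework's interactive pipeline or importance algorithms.

# Extract hidden states and attention maps for inspection/visualization.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tok("Transformers are hard to debug without good tooling.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True, output_attentions=True)

print(len(out.hidden_states), out.hidden_states[-1].shape)  # per-layer hidden states
print(len(out.attentions), out.attentions[0].shape)         # per-layer attention maps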

2020

pdf bib
Stigma Annotation Scheme and Stigmatized Language Detection in Health-Care Discussions on Social Media
Nadiya Straton | Hyeju Jang | Raymond Ng
Proceedings of the Twelfth Language Resources and Evaluation Conference

Much research has been done within the social sciences on the interpretation and influence of stigma on human behaviour and health. Stigma results in out-of-group exclusion, distancing, cognitive separation, status loss, discrimination, and in-group pressure, and often leads to disengagement and non-adherence to the treatment plan and prescriptions of the doctor. However, little work has been conducted on computational identification of stigma in general, and in social media discourse in particular. In this paper, we develop an annotation scheme and improve the annotation process for stigma identification, which can be applied to other health-care domains. The data from pro-vaccination and anti-vaccination discussion groups are annotated by trained annotators with professional backgrounds in social science and health-care studies, who can therefore be considered experts on the subject, in contrast to a non-expert crowd. Amazon MTurk annotators form another group of annotators, whose educational background is unknown; they are initially treated as a non-expert crowd on the subject matter of stigma. We analyze the annotations with visualisation techniques and features from the LIWC (Linguistic Inquiry and Word Count) list, and make predictions based on bi-grams with traditional and deep learning models. A data augmentation method combined with a CNN achieves high accuracy in comparison to other models. The success of the rigorous annotation process in identifying stigma is reconfirmed by the high prediction rate achieved with the CNN.
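A minimal sketch of a bi-gram-style CNN classifier of the kind evaluated above; the vocabulary size, filter width, and two-class output are placeholders, not the paper's configuration.

# Hypothetical sketch: a small CNN text classifier with width-2 (bi-gram-like) filters.
import torch
import torch.nn as nn

class BigramCNN(nn.Module):
    def __init__(self, vocab_size=5000, emb_dim=100, n_filters=64, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=2)  # width-2 filters over token pairs
        self.fc = nn.Linear(n_filters, n_classes)

    def forward(self, token_ids):                  # (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)    # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))               # (batch, n_filters, seq_len - 1)
        x = x.max(dim=2).values                    # max-over-time pooling
        return self.fc(x)                          # stigma vs. non-stigma logits

logits = BigramCNN()(torch.randint(1, 5000, (4, 40)))
print(logits.shape)  # torch.Size([4, 2])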

pdf bib
Exploratory Analysis of COVID-19 Related Tweets in North America to Inform Public Health Institutes
Hyeju Jang | Emily Rempel | Giuseppe Carenini | Naveed Janjua
Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020

Social media is a rich source for learning about people’s reactions to social issues. As COVID-19 has significantly impacted people’s lives, it is essential to capture how people react to public health interventions and to understand their concerns. In this paper, we investigate people’s reactions and concerns about COVID-19 in North America, focusing especially on Canada. We analyze COVID-19 related tweets using topic modeling and aspect-based sentiment analysis, and interpret the results with public health experts. We compare the timeline of topics discussed with the timing of the implementation of public health interventions for COVID-19. We also examine people’s sentiment about COVID-19 related issues. We discuss how the results can help public health agencies when designing policies for new interventions. Our work shows how Natural Language Processing (NLP) techniques can be applied to public health questions with domain expert involvement.
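A minimal sketch of the topic-modeling step on a few invented tweets; the corpus, preprocessing, number of topics, and the aspect-based sentiment component used in the paper are not reproduced here.

# Hypothetical sketch: LDA topic modeling over a tiny synthetic tweet collection.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

tweets = [
    "schools closed again because of covid restrictions",
    "got my test results back, long line at the clinic",
    "masks required on transit starting next week",
    "vaccine appointment booked for my parents",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(tweets)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]  # top words per topic
    print(f"topic {k}:", ", ".join(top))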

2017

pdf bib
Modeling Dialogue Acts with Content Word Filtering and Speaker Preferences
Yohan Jo | Michael Yoder | Hyeju Jang | Carolyn Rosé
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We present an unsupervised model of dialogue act sequences in conversation. By modeling topical themes as transitioning more slowly than dialogue acts in conversation, our model de-emphasizes content-related words in order to focus on conversational function words that signal dialogue acts. We also incorporate speaker tendencies to use some acts more than others as an additional predictor of dialogue act prevalence beyond temporal dependencies. According to the evaluation presented on two dissimilar corpora, the CNET forum and NPS Chat corpus, the effectiveness of each modeling assumption is found to vary depending on characteristics of the data. De-emphasizing content-related words yields improvement on the CNET corpus, while utilizing speaker tendencies is advantageous on the NPS corpus. The components of our model complement one another to achieve robust performance on both corpora and outperform state-of-the-art baseline models.

pdf bib
Finding Structure in Figurative Language: Metaphor Detection with Topic-based Frames
Hyeju Jang | Keith Maki | Eduard Hovy | Carolyn Rosé
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue

In this paper, we present a novel and highly effective method for induction and application of metaphor frame templates as a step toward detecting metaphor in extended discourse. We infer implicit facets of a given metaphor frame using a semi-supervised bootstrapping approach on an unlabeled corpus. Our model applies this frame facet information to metaphor detection, and achieves the state-of-the-art performance on a social media dataset when building upon other proven features in a nonlinear machine learning model. In addition, we illustrate the mechanism through which the frame and topic information enable the more accurate metaphor detection.

2016

pdf bib
Metaphor Detection with Topic Transition, Emotion and Cognition in Context
Hyeju Jang | Yohan Jo | Qinlan Shen | Michael Miller | Seungwhan Moon | Carolyn Rosé
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2015

pdf bib
Effects of Situational Factors on Metaphor Detection in an Online Discussion Forum
Hyeju Jang | Miaomiao Wen | Carolyn Rosé
Proceedings of the Third Workshop on Metaphor in NLP

pdf bib
Metaphor Detection in Discourse
Hyeju Jang | Seungwhan Moon | Yohan Jo | Carolyn Rosé
Proceedings of the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2014

pdf bib
Conversational Metaphors in Use: Exploring the Contrast between Technical and Everyday Notions of Metaphor
Hyeju Jang | Mario Piergallini | Miaomiao Wen | Carolyn Rosé
Proceedings of the Second Workshop on Metaphor in NLP

2013

pdf bib
Extracting Events with Informal Temporal References in Personal Histories in Online Communities
Miaomiao Wen | Zeyu Zheng | Hyeju Jang | Guang Xiang | Carolyn Penstein Rosé
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

2012

pdf bib
Generating Diagnostic Multiple Choice Comprehension Cloze Questions
Jack Mostow | Hyeju Jang
Proceedings of the Seventh Workshop on Building Educational Applications Using NLP

pdf bib
Inferring Selectional Preferences from Part-Of-Speech N-grams
Hyeju Jang | Jack Mostow
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics