Jiyoung Han


2024

XFACT Team0331 at PerspectiveArg2024: Sampling from Bounded Clusters for Diverse Relevant Argument Retrieval
Wan Ju Kang | Jiyoung Han | Jaemin Jung | James Thorne
Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024)

This paper reports on the argument mining system submitted to the ArgMining workshop 2024 for The Perspective Argument Retrieval Shared Task (Falk et al., 2024). We combine the strengths of a smaller Sentence BERT model and a Large Language Model: the former is fine-tuned for a contrastive embedding objective and a classification objective, whereas the latter is invoked to augment the query and populate the latent space with diverse relevant arguments. We conduct an ablation study on these components to find that each contributes substantially to the diversity and relevance criteria for the top-k retrieval of arguments from the given corpus.
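As a rough illustration of the retrieval side of such a pipeline, the sketch below clusters Sentence-BERT embeddings of candidate arguments and samples a bounded number of top-scoring arguments from each cluster. The encoder checkpoint, cluster count, and per-cluster quota are placeholder assumptions for illustration, not the submission's actual configuration, and the LLM-based query augmentation step is omitted.

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
import numpy as np

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in encoder

def retrieve_diverse(query, corpus, k=10, n_clusters=5):
    corpus_emb = encoder.encode(corpus, convert_to_numpy=True, normalize_embeddings=True)
    query_emb = encoder.encode(query, convert_to_numpy=True, normalize_embeddings=True)

    # Partition the candidate arguments into clusters of the embedding space.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(corpus_emb)

    # Rank within each cluster by cosine similarity to the query, then take a
    # bounded number of arguments per cluster so the top-k stays diverse.
    scores = corpus_emb @ query_emb
    per_cluster = max(1, k // n_clusters)
    picked = []
    for c in range(n_clusters):
        idx = np.where(labels == c)[0]
        picked.extend(idx[np.argsort(-scores[idx])][:per_cluster].tolist())

    # Fill any remaining slots with the globally highest-scoring leftovers.
    picked.extend([i for i in np.argsort(-scores) if i not in picked][: k - len(picked)])
    return [corpus[i] for i in picked[:k]]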

2023

Detecting Contextomized Quotes in News Headlines by Contrastive Learning
Seonyeong Song | Hyeonho Song | Kunwoo Park | Jiyoung Han | Meeyoung Cha
Findings of the Association for Computational Linguistics: EACL 2023

Quotes are critical for establishing credibility in news articles. A direct quote enclosed in quotation marks has a strong visual appeal and is a sign of a reliable citation. Unfortunately, this journalistic practice is not strictly followed, and a quote in the headline is often “contextomized.” Such a quote uses words out of context in a way that alters the speaker’s intention so that there is no semantically matching quote in the body text. We present QuoteCSE, a contrastive learning framework that represents the embedding of news quotes based on domain-driven positive and negative samples to identify such an editorial strategy. The dataset and code are available at https://github.com/ssu-humane/contextomized-quote-contrastive.
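A minimal sketch of the kind of contrastive objective such a framework could use is shown below, assuming precomputed embeddings for a headline quote, a semantically matching body-text quote (positive), and a contextomized quote (negative). The tensor names and temperature are illustrative and not QuoteCSE's exact formulation.

import torch
import torch.nn.functional as F

def quote_contrastive_loss(headline_emb, positive_emb, negative_emb, temperature=0.05):
    # Cosine similarities between each headline quote and its candidate body quotes.
    h = F.normalize(headline_emb, dim=-1)    # (batch, dim)
    pos = F.normalize(positive_emb, dim=-1)  # (batch, dim)
    neg = F.normalize(negative_emb, dim=-1)  # (batch, dim)

    sim_pos = (h * pos).sum(-1, keepdim=True) / temperature  # (batch, 1)
    sim_neg = (h * neg).sum(-1, keepdim=True) / temperature  # (batch, 1)

    # Cross-entropy over [positive, negative] pulls matching quotes together
    # and pushes contextomized quotes apart in embedding space.
    logits = torch.cat([sim_pos, sim_neg], dim=1)
    labels = torch.zeros(h.size(0), dtype=torch.long, device=h.device)
    return F.cross_entropy(logits, labels)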

Disentangling Structure and Style: Political Bias Detection in News by Inducing Document Hierarchy
Jiwoo Hong | Yejin Cho | Jiyoung Han | Jaemin Jung | James Thorne
Findings of the Association for Computational Linguistics: EMNLP 2023

We address an important gap in detecting political bias in news articles. Previous works that perform document classification can be influenced by the writing style of each news outlet, leading to overfitting and limited generalizability. Our approach overcomes this limitation by considering both the sentence-level semantics and the document-level rhetorical structure, resulting in a more robust and style-agnostic approach to detecting political bias in news articles. We introduce a novel multi-head hierarchical attention model that effectively encodes the structure of long documents through a diverse ensemble of attention heads. While journalism follows a formalized rhetorical structure, the writing style may vary by news outlet. We demonstrate that our method overcomes this domain dependency and outperforms previous approaches in both robustness and accuracy. Further analysis and human evaluation demonstrate the ability of our model to capture common discourse structures in journalism.
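The sketch below illustrates one way to realize multi-head attention pooling over sentence embeddings into a document-level representation for bias classification. The embedding dimension, head count, label set, and final linear classifier are assumptions for illustration rather than the paper's exact architecture.

import torch
import torch.nn as nn

class MultiHeadAttentivePooling(nn.Module):
    def __init__(self, dim=768, n_heads=4, n_classes=3):
        super().__init__()
        # Each head learns its own attention pattern over the document's sentences.
        self.head_queries = nn.Parameter(torch.randn(n_heads, dim))
        self.classifier = nn.Linear(n_heads * dim, n_classes)

    def forward(self, sent_emb, mask):
        # sent_emb: (batch, n_sentences, dim) sentence-level encodings
        # mask:     (batch, n_sentences), 1 for real sentences, 0 for padding
        scores = torch.einsum("bsd,hd->bhs", sent_emb, self.head_queries)
        scores = scores.masked_fill(mask.unsqueeze(1) == 0, float("-inf"))
        weights = scores.softmax(dim=-1)                       # (batch, heads, sents)
        doc = torch.einsum("bhs,bsd->bhd", weights, sent_emb)  # per-head document vectors
        return self.classifier(doc.flatten(1))                 # (batch, n_classes)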

2019

The Fallacy of Echo Chambers: Analyzing the Political Slants of User-Generated News Comments in Korean Media
Jiyoung Han | Youngin Lee | Junbum Lee | Meeyoung Cha
Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)

This study analyzes the political slants of user comments on Korean partisan media. We built a BERT-based classifier to detect the political leaning of short comments via semi-unsupervised deep learning methods, producing an F1 score of 0.83. Classifying 21.6K comments, we found a high presence of conservative bias on both conservative and liberal news outlets. Moreover, this study discloses an asymmetry across the partisan spectrum in that more liberals (48.0%) than conservatives (23.6%) comment not only on news stories resonating with their political perspectives but also on those challenging their viewpoints. These findings advance the current understanding of online echo chambers.
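For illustration only, the snippet below scores the political leaning of short comments with a fine-tuned BERT sequence classifier. The checkpoint name and label mapping are placeholders; the study's own semi-unsupervised training pipeline and Korean-language model are not reproduced here.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "bert-base-multilingual-cased"  # assumed base encoder for Korean comments
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

def predict_leaning(comments):
    batch = tokenizer(comments, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    # Placeholder mapping: 0 = conservative, 1 = liberal.
    return logits.argmax(dim=-1).tolist()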