Chenhan Yuan


2023

Zero-shot Temporal Relation Extraction with ChatGPT
Chenhan Yuan | Qianqian Xie | Sophia Ananiadou
The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks

The goal of temporal relation extraction is to infer the temporal relation between two events in a document. Supervised models dominate this task. In this work, we investigate ChatGPT's ability to perform zero-shot temporal relation extraction. We design three prompt techniques that break down the task and use them to evaluate ChatGPT. Our experiments show that ChatGPT's performance lags far behind that of supervised methods and depends heavily on prompt design. We further demonstrate that ChatGPT correctly infers more instances of minority relation classes than supervised methods. We also discuss ChatGPT's current shortcomings on temporal relation extraction: it cannot maintain consistency during temporal inference, and it fails at long-dependency temporal inference.
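
To illustrate the task setup, a hypothetical zero-shot prompt might look like the sketch below. This is not one of the paper's three prompt designs, which are not reproduced here; the label set (BEFORE, AFTER, EQUAL, VAGUE) is an assumption based on common temporal relation schemes.

    # Hypothetical zero-shot prompt for temporal relation extraction.
    # The label set is an assumption (common TempRel scheme); the paper's
    # actual prompt designs differ and are not reproduced here.
    def build_prompt(sentence: str, event1: str, event2: str) -> str:
        return (
            "Determine the temporal relation between two events in the sentence.\n"
            f"Sentence: {sentence}\n"
            f"Event 1: {event1}\n"
            f"Event 2: {event2}\n"
            "Answer with exactly one of: BEFORE, AFTER, EQUAL, VAGUE."
        )

    # Example usage: the returned string can be sent to any chat LLM.
    print(build_prompt(
        "The company announced the merger after regulators approved the deal.",
        "announced",
        "approved",
    ))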

2021

Unsupervised Relation Extraction: A Variational Autoencoder Approach
Chenhan Yuan | Hoda Eldardiry
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Unsupervised relation extraction works by clustering entity pairs that express the same relations in text. Some existing variational autoencoder (VAE)-based approaches train the relation extraction model as an encoder that generates relation classifications, with a decoder trained jointly to reconstruct the encoder input from the encoder-generated classifications. Because these classifications serve as a latent variable, they must follow a pre-defined prior distribution, which results in unstable training. We propose a VAE-based unsupervised relation extraction technique that overcomes this limitation by treating the classifications as an intermediate variable rather than a latent variable. Specifically, the classifications are conditioned on the sentence input, while the latent variable is conditioned on both the classifications and the sentence input. This allows our model to connect the decoder with the encoder without placing restrictions on the classification distribution, which improves training stability. Our approach is evaluated on the NYT dataset and outperforms state-of-the-art methods.
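
A minimal PyTorch sketch of the conditioning pattern described in the abstract: the classification c is produced from the sentence encoding x with no prior imposed on it, while a Gaussian latent z is conditioned on both c and x and the decoder reconstructs x from z. The layer sizes, the Gumbel-softmax relaxation, and the MSE reconstruction term are illustrative assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RelationVAE(nn.Module):
        def __init__(self, d_sent=768, n_rel=10, d_latent=64):
            super().__init__()
            self.classifier = nn.Linear(d_sent, n_rel)           # c | x
            self.to_mu = nn.Linear(d_sent + n_rel, d_latent)     # z | c, x
            self.to_logvar = nn.Linear(d_sent + n_rel, d_latent)
            self.decoder = nn.Linear(d_latent, d_sent)           # x_hat | z

        def forward(self, x):
            # Classification is an intermediate variable: conditioned on
            # the sentence, but with no prior distribution imposed on it.
            c = F.gumbel_softmax(self.classifier(x), tau=1.0, hard=False)
            h = torch.cat([x, c], dim=-1)
            mu, logvar = self.to_mu(h), self.to_logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
            x_hat = self.decoder(z)
            recon = F.mse_loss(x_hat, x)
            # The KL penalty applies only to the latent z, not to c.
            kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
            return c, recon + kl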

2019

Efficient text generation of user-defined topic using generative adversarial networks
Chenhan Yuan | Yi-Chin Huang | Cheng-Hung Tsai
Proceedings of the 4th Workshop on Computational Creativity in Language Generation