Willy Chung


2023

Contrastive Learning for Inference in Dialogue
Etsuko Ishii | Yan Xu | Bryan Wilie | Ziwei Ji | Holy Lovenia | Willy Chung | Pascale Fung
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Inference, especially that derived from inductive processes, is a crucial component of conversation, complementing the information implicitly or explicitly conveyed by a speaker. While recent large language models show remarkable advances in inference tasks, their performance in inductive reasoning, where not all information is present in the context, lags far behind their deductive reasoning. In this paper, we analyze model behavior based on task difficulty defined by the semantic information gap, which distinguishes inductive from deductive reasoning. Our analysis reveals that the information gap between dialogue contexts and desired inferences makes the inductive inference process more challenging. To mitigate this information gap, we investigate a contrastive learning approach that feeds the model negative samples. Our experiments suggest that negative samples help models understand what is wrong and improve their generated inferences.
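
The abstract does not spell out the exact objective, but one common way to "feed negative samples" is an InfoNCE-style loss that pushes the gold inference's likelihood above those of sampled negatives given the dialogue context. The sketch below is illustrative only; the function name and tensor shapes are assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a contrastive objective over inference candidates:
# the gold inference is scored against sampled negative inferences given the
# dialogue context. None of the names below come from the paper itself.
import torch
import torch.nn.functional as F

def contrastive_inference_loss(pos_logprob: torch.Tensor,
                               neg_logprobs: torch.Tensor,
                               temperature: float = 1.0) -> torch.Tensor:
    """InfoNCE-style loss pushing the gold inference's sequence log-likelihood
    above those of the negative samples.

    pos_logprob:  shape (batch,)     - log p(gold inference | context)
    neg_logprobs: shape (batch, k)   - log p(negative_i | context)
    """
    # Concatenate the positive score with the k negative scores per example.
    scores = torch.cat([pos_logprob.unsqueeze(1), neg_logprobs], dim=1) / temperature
    # The positive candidate always sits at index 0.
    targets = torch.zeros(scores.size(0), dtype=torch.long)
    return F.cross_entropy(scores, targets)

# Toy usage with random scores standing in for model log-likelihoods.
pos = torch.randn(4)
neg = torch.randn(4, 3)
print(contrastive_inference_loss(pos, neg))
```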

A Multitask, Multilingual, Multimodal Evaluation of ChatGPT on Reasoning, Hallucination, and Interactivity
Yejin Bang | Samuel Cahyawijaya | Nayeon Lee | Wenliang Dai | Dan Su | Bryan Wilie | Holy Lovenia | Ziwei Ji | Tiezheng Yu | Willy Chung | Quyet V. Do | Yan Xu | Pascale Fung
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

PICK: Polished & Informed Candidate Scoring for Knowledge-Grounded Dialogue Systems
Bryan Wilie | Yan Xu | Willy Chung | Samuel Cahyawijaya | Holy Lovenia | Pascale Fung
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

InstructAlign: High-and-Low Resource Language Alignment via Continual Crosslingual Instruction Tuning
Samuel Cahyawijaya | Holy Lovenia | Tiezheng Yu | Willy Chung | Pascale Fung
Proceedings of the First Workshop in South East Asian Language Processing

InstructTODS: Large Language Models for End-to-End Task-Oriented Dialogue Systems
Willy Chung | Samuel Cahyawijaya | Bryan Wilie | Holy Lovenia | Pascale Fung
Proceedings of the Second Workshop on Natural Language Interfaces

2022

Clozer: Adaptable Data Augmentation for Cloze-style Reading Comprehension
Holy Lovenia | Bryan Wilie | Willy Chung | Zeng Min | Samuel Cahyawijaya | Dan Su | Pascale Fung
Proceedings of the 7th Workshop on Representation Learning for NLP

Task-adaptive pre-training (TAPT) alleviates the lack of labelled data and provides a performance lift by adapting unlabelled data to the downstream task. Unfortunately, existing adaptations mainly involve deterministic rules that cannot generalize well. Here, we propose Clozer, a sequence-tagging-based cloze answer extraction method used in TAPT that is extendable to any cloze-style machine reading comprehension (MRC) downstream task. We experiment on multiple-choice cloze-style MRC tasks and show that Clozer performs significantly better than the oracle and the state of the art in boosting TAPT effectiveness for lifting model performance, and we show that Clozer is able to recognize the gold answers independently of any heuristics.
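
To make the idea concrete, the following is a minimal sketch, not the authors' code, of how sequence-tagging-style answer tags could turn a raw passage into a cloze example for task-adaptive pre-training. The BIO tag scheme, mask token, and example sentence are all assumptions for illustration.

```python
# Illustrative sketch (not the authors' code): given token-level answer tags
# predicted by a sequence-tagging model, extract answer spans and turn a raw
# passage into a cloze-style example for task-adaptive pre-training.
from typing import List, Tuple

def extract_spans(tags: List[str]) -> List[Tuple[int, int]]:
    """Collect (start, end) spans from BIO-style tags ('B', 'I', 'O')."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":
            if start is not None:
                spans.append((start, i))
            start = i
        elif tag == "O" and start is not None:
            spans.append((start, i))
            start = None
    if start is not None:
        spans.append((start, len(tags)))
    return spans

def make_cloze(tokens: List[str], tags: List[str], mask: str = "[MASK]"):
    """Replace each tagged span with a mask token; return (cloze text, answers)."""
    spans = extract_spans(tags)
    answers = [" ".join(tokens[s:e]) for s, e in spans]
    cloze = list(tokens)
    for s, e in reversed(spans):
        cloze[s:e] = [mask]
    return " ".join(cloze), answers

tokens = "Marie Curie won the Nobel Prize in 1903".split()
tags   = ["B", "I", "O", "O", "B", "I", "O", "B"]
print(make_cloze(tokens, tags))
```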

Every picture tells a story: Image-grounded controllable stylistic story generation
Holy Lovenia | Bryan Wilie | Romain Barraud | Samuel Cahyawijaya | Willy Chung | Pascale Fung
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature

Generating a short story from an image is arduous. Unlike image captioning, story generation from an image poses multiple challenges: preserving story coherence, appropriately assessing the quality of the story, steering the generated story toward a certain style, and addressing the scarcity of image-story pair reference datasets, which limits supervision during training. In this work, we introduce Plug-and-Play Story Teller (PPST) and improve image-to-story generation by: 1) alleviating the data scarcity problem by incorporating large pre-trained models, namely CLIP and GPT-2, to facilitate fluent image-to-text generation with minimal supervision, and 2) enabling more style-relevant generation by incorporating stylistic adapters to control the story generation. We conduct image-to-story generation experiments with non-styled, romance-styled, and action-styled PPST approaches and compare our generated stories with those of previous work over three aspects, i.e., story coherence, image-story relevance, and style fitness, using both automatic and human evaluation. The results show that PPST improves story coherence and has better image-story relevance, but it has yet to be adequately stylistic.
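
As a rough sketch of the plug-and-play idea (CLIP supplying image grounding, GPT-2 supplying fluent generation), the snippet below ranks candidate concepts for an image with CLIP and then prompts GPT-2 with the top ones. The model checkpoints, concept list, and prompt format are illustrative assumptions, and the stylistic-adapter component described in the abstract is omitted.

```python
# Hedged sketch of a plug-and-play image-to-story pipeline in the spirit of
# PPST: CLIP scores candidate concepts for the image, and GPT-2 continues a
# prompt built from the best-matching ones. Not the authors' implementation.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel, GPT2Tokenizer

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
gpt2_tok = GPT2Tokenizer.from_pretrained("gpt2")

def image_to_story(image: Image.Image, concepts: List[str], top_k: int = 3) -> str:
    # Score each candidate concept against the image with CLIP.
    inputs = clip_proc(text=concepts, images=image, return_tensors="pt", padding=True)
    probs = clip(**inputs).logits_per_image.softmax(dim=-1)[0]
    keywords = [concepts[i] for i in probs.topk(top_k).indices]
    # Seed GPT-2 with the selected keywords and let it write a short story.
    prompt = "A short story about " + ", ".join(keywords) + ":\n"
    ids = gpt2_tok(prompt, return_tensors="pt").input_ids
    out = gpt2.generate(ids, max_new_tokens=80, do_sample=True, top_p=0.9)
    return gpt2_tok.decode(out[0], skip_special_tokens=True)

from typing import List  # imported here for the annotation above

story = image_to_story(Image.open("photo.jpg"),
                       ["a beach", "a castle", "a forest", "a city at night"])
print(story)
```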