Zijie Wang


2024

Interpreting Answers to Yes-No Questions in Dialogues from Multiple Domains
Zijie Wang | Farzana Rashid | Eduardo Blanco
Findings of the Association for Computational Linguistics: NAACL 2024

People often answer yes-no questions without explicitly saying yes, no, or similar polar keywords. Figuring out the meaning of indirect answers is challenging, even for large language models. In this paper, we investigate this problem working with dialogues from multiple domains. We present new benchmarks in three diverse domains: movie scripts, tennis interviews, and airline customer service. We present an approach grounded on distant supervision and blended training to quickly adapt to a new dialogue domain. Experimental results show that our approach is never detrimental and yields F1 improvements as high as 11–34%.
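The abstract names two techniques: distant supervision (label answers that contain a polar keyword, then strip the keyword so a model must learn from the rest) and blended training (mixing that weakly labeled source data with a small amount of target-domain data). The sketch below is a minimal illustration of both ideas under assumed keyword lists and an assumed blending ratio; it is not the paper's exact procedure.

```python
import random

def distant_label(answer: str):
    """Heuristically label an answer via polar keywords (distant supervision).

    Returns (label, answer with the keyword stripped), or None if no keyword
    is found. The keyword lists and the stripping rule are illustrative
    assumptions, not the rules used in the paper.
    """
    tokens = answer.lower().split()
    if not tokens:
        return None
    if tokens[0].strip(",.") in {"yes", "yeah", "yep", "definitely"}:
        return "yes", " ".join(answer.split()[1:])
    if tokens[0].strip(",.") in {"no", "nope", "nah", "never"}:
        return "no", " ".join(answer.split()[1:])
    return None

def blend(source_examples, target_examples, target_ratio=0.5, seed=13):
    """Blend distantly supervised source data with target-domain data.

    Oversamples the (usually small) target set so it makes up roughly
    `target_ratio` of the mix; the ratio is a tunable assumption.
    """
    rng = random.Random(seed)
    n_target = int(len(source_examples) * target_ratio / (1 - target_ratio))
    mixed = list(source_examples) + [rng.choice(target_examples)
                                     for _ in range(n_target)]
    rng.shuffle(mixed)
    return mixed
```

In this sketch, `distant_label` turns unlabeled dialogue turns with direct answers into training pairs, and `blend` builds one training pool per epoch so a classifier sees both domains at every step.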

2023

Interpreting Indirect Answers to Yes-No Questions in Multiple Languages
Zijie Wang | Md Hossain | Shivam Mathur | Terry Melo | Kadir Ozler | Keun Park | Jacob Quintero | MohammadHossein Rezaei | Shreya Shakya | Md Uddin | Eduardo Blanco
Findings of the Association for Computational Linguistics: EMNLP 2023

Yes-no questions expect a yes or no for an answer, but people often skip polar keywords. Instead, they answer with long explanations that must be interpreted. In this paper, we focus on this challenging problem and release new benchmarks in eight languages. We present a distant supervision approach to collect training data, and demonstrate that direct answers (i.e., with polar keywords) are useful to train models to interpret indirect answers (i.e., without polar keywords). We show that monolingual fine-tuning is beneficial if training data can be obtained via distant supervision for the language of interest (5 languages). Additionally, we show that cross-lingual fine-tuning is always beneficial (8 languages).
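Cross-lingual fine-tuning here means training one multilingual encoder on pooled distantly supervised data from several languages and evaluating on the language of interest. Below is a minimal sketch of that setup; the model name (xlm-roberta-base), the 3-way label scheme, the placeholder examples, and the hyperparameters are all assumptions for illustration, not the paper's configuration.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["yes", "no", "middle"]  # assumed interpretation scheme

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=len(LABELS)
)

# Pooled (question, answer, label) triples from several languages;
# the two examples below are placeholders, not data from the paper.
train = [
    ("Do you like the show?", "I watch it every week.", 0),
    ("¿Vas a venir mañana?", "Tengo demasiado trabajo.", 1),
]

def collate(batch):
    # Encode question-answer pairs jointly, as a sentence-pair task.
    qs, ans, ys = zip(*batch)
    enc = tokenizer(list(qs), list(ans), truncation=True,
                    padding=True, return_tensors="pt")
    enc["labels"] = torch.tensor(ys)
    return enc

loader = DataLoader(train, batch_size=2, collate_fn=collate)
optim = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for batch in loader:
    out = model(**batch)   # cross-entropy loss computed internally
    out.loss.backward()
    optim.step()
    optim.zero_grad()
```

Because the encoder is multilingual, a model fine-tuned this way can be applied to languages with no distantly supervised training data of their own, which is the setting where the abstract reports cross-lingual fine-tuning is always beneficial.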