Zebin Ou


2023

GEMINI: Controlling The Sentence-Level Summary Style in Abstractive Text Summarization
Guangsheng Bao | Zebin Ou | Yue Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Human experts write summaries using different techniques, including extracting a sentence from the document and rewriting it, or fusing various pieces of information from the document to abstract it. These techniques are flexible and thus difficult to imitate with any single method. To address this issue, we propose an adaptive model, GEMINI, that integrates a rewriter and a generator to mimic the sentence-rewriting and abstracting techniques, respectively. GEMINI adaptively chooses to rewrite a specific document sentence or to generate a summary sentence from scratch. Experiments demonstrate that our adaptive approach outperforms pure abstractive and rewriting baselines on three benchmark datasets, achieving the best results on WikiHow. Interestingly, empirical results show that the style of each human summary sentence is consistently predictable given its context. We release our code and model at https://github.com/baoguangsheng/gemini.
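The abstract describes an adaptive architecture that, for each summary sentence, selects between a rewriter and a generator. Below is a minimal, self-contained Python sketch of that control flow only; the names (predict_style, rewrite_sentence, generate_sentence) are hypothetical placeholders and do not reflect the released implementation at the GitHub link above.

# Hypothetical sketch of GEMINI-style adaptive summarization control flow.
# For each summary sentence, a style predictor picks "rew" (rewrite one
# document sentence) or "abs" (generate from scratch), then dispatches
# to the matching decoder. All components here are stubs.

from typing import List

def predict_style(context: List[str], document: List[str]) -> str:
    """Placeholder style classifier; the paper finds this per-sentence
    choice is consistently predictable from context."""
    return "rew" if len(context) == 0 else "abs"

def rewrite_sentence(document: List[str], index: int) -> str:
    """Placeholder rewriter: conditions on one selected document sentence.
    A real rewriter would paraphrase and compress it."""
    return document[index]

def generate_sentence(document: List[str], context: List[str]) -> str:
    """Placeholder generator: abstracts freely from the whole document."""
    return "A generated sentence fusing information from the document."

def summarize(document: List[str], num_sentences: int = 3) -> List[str]:
    summary: List[str] = []
    for _ in range(num_sentences):
        if predict_style(summary, document) == "rew":
            # Stub sentence selection: always rewrite the first sentence.
            summary.append(rewrite_sentence(document, 0))
        else:
            summary.append(generate_sentence(document, summary))
    return summary

if __name__ == "__main__":
    doc = ["First sentence.", "Second sentence.", "Third sentence."]
    print(summarize(doc))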

2022

On the Role of Pre-trained Language Models in Word Ordering: A Case Study with BART
Zebin Ou | Meishan Zhang | Yue Zhang
Proceedings of the 29th International Conference on Computational Linguistics

Word ordering is a constrained language generation task that takes unordered words as input. Existing work uses linear models and neural networks for the task, yet pre-trained language models have not been studied for word ordering, let alone why they help. We take BART as an instance and show its effectiveness on the task. To explain why BART helps word ordering, we extend the analysis with probing and empirically identify syntactic dependency knowledge in BART as a reliable explanation. We also report performance gains with BART on the related task of partial tree linearization, to which our analysis readily extends.
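As a rough illustration of the seq2seq framing the abstract implies, the sketch below feeds a shuffled word sequence to BART and decodes an ordered sentence, assuming the HuggingFace transformers library. An off-the-shelf facebook/bart-base checkpoint is not fine-tuned for word ordering, so this shows the input/output framing only, not the paper's actual training setup.

# Word ordering framed as sequence-to-sequence generation with BART:
# shuffled words in, ordered sentence out. A checkpoint fine-tuned on
# (shuffled, ordered) pairs would be needed for meaningful output.

from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Unordered bag of words as the source sequence.
shuffled = "mat the sat cat the on"
inputs = tokenizer(shuffled, return_tensors="pt")

# With a fine-tuned checkpoint, beam search would decode the reordered
# sentence, e.g. "the cat sat on the mat".
output_ids = model.generate(inputs["input_ids"], max_length=32, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))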