Xiaoye Ouyang


2025

Collaborative Document Simplification Using Multi-Agent Systems
Dengzhao Fang | Jipeng Qiang | Xiaoye Ouyang | Yi Zhu | Yunhao Yuan | Yun Li
Proceedings of the 31st International Conference on Computational Linguistics

Research on text simplification has been ongoing for many years. However, the task of document simplification (DS) remains a significant challenge due to the need to consider complex factors such as technical terminology, metaphors, and overall coherence. In this work, we introduce a novel multi-agent framework for document simplification (AgentSimp) based on large language models (LLMs). This framework emulates the collaborative process of a human expert team through the roles played by multiple agents, addressing the intricate demands of document simplification. We explore two communication strategies among agents (pipeline-style and synchronous) and two document reconstruction strategies (Direct and Iterative). According to both automatic evaluation metrics and human evaluation results, the documents simplified by AgentSimp are judged to be more thoroughly simplified and more coherent across articles of different types and styles.
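
A minimal sketch of how the pipeline-style communication strategy with Direct reconstruction might be wired up; the agent roles, prompts, and the call_llm stub below are illustrative assumptions, not the paper's actual implementation.

# Illustrative sketch: pipeline-style agents with direct reconstruction.
# The role prompts and the call_llm placeholder are assumptions for
# illustration only.

from typing import Callable, List

def call_llm(prompt: str) -> str:
    """Placeholder for a large-language-model call (supply your own client)."""
    raise NotImplementedError("Plug in an LLM client here.")

# Each agent is a role-specific prompt applied in sequence (pipeline-style).
AGENT_PROMPTS: List[str] = [
    "Explain any technical terminology in plain language:\n{text}",
    "Rewrite metaphors and figurative language literally:\n{text}",
    "Simplify sentence structure while preserving meaning:\n{text}",
    "Check coherence across paragraphs and smooth the transitions:\n{text}",
]

def simplify_document(paragraphs: List[str],
                      llm: Callable[[str], str] = call_llm) -> str:
    """Pass each paragraph through the agent pipeline, then reconstruct directly."""
    simplified = []
    for para in paragraphs:
        text = para
        for prompt in AGENT_PROMPTS:              # pipeline-style: each agent
            text = llm(prompt.format(text=text))  # sees the previous agent's output
        simplified.append(text)
    return "\n\n".join(simplified)                # Direct reconstruction: concatenate

In a synchronous variant, the agents would instead exchange messages about the same draft before a final rewrite; in an Iterative reconstruction, each simplified paragraph would be appended while conditioning on the document assembled so far.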

Post-Hoc Watermarking for Robust Detection in Text Generated by Large Language Models
Jifei Hao | Jipeng Qiang | Yi Zhu | Yun Li | Yunhao Yuan | Xiaoye Ouyang
Proceedings of the 31st International Conference on Computational Linguistics


2023

Chinese Lexical Substitution: Dataset and Method
Jipeng Qiang | Kang Liu | Ying Li | Yun Li | Yi Zhu | Yun-Hao Yuan | Xiaocheng Hu | Xiaoye Ouyang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Existing lexical substitution (LS) benchmarks were collected by asking human annotators to think of substitutes from memory, resulting in benchmarks with limited coverage and relatively small scales. To overcome this problem, we propose a novel annotation method to construct an LS dataset based on human and machine collaboration. Based on our annotation method, we construct the first Chinese LS dataset CHNLS, which consists of 33,695 instances and 144,708 substitutes, covering three text genres (News, Novel, and Wikipedia). Specifically, we first combine four unsupervised LS methods into an ensemble method to generate candidate substitutes, and then let human annotators judge these candidates or add new ones. This collaborative process combines the diversity of machine-generated substitutes with the expertise of human annotators. Experimental results show that the ensemble method outperforms other LS methods. To the best of our knowledge, this is the first study of the Chinese LS task.
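
A brief sketch of what such a human-machine collaborative annotation loop could look like: several unsupervised LS systems propose substitutes, their candidates are pooled, and annotators then accept, reject, or add substitutes. The generator and judge callables are hypothetical stand-ins, not the paper's systems.

# Illustrative sketch of ensemble candidate generation plus human judging.
# The generator and judge functions are assumptions for illustration only.

from collections import Counter
from typing import Callable, Dict, List

Generator = Callable[[str, str], List[str]]  # (sentence, target word) -> substitutes

def pool_candidates(sentence: str, target: str,
                    generators: List[Generator]) -> List[str]:
    """Union the substitutes proposed by the unsupervised LS methods,
    ranked by how many methods agree on each candidate."""
    votes: Counter = Counter()
    for gen in generators:
        for cand in gen(sentence, target):
            votes[cand] += 1
    return [cand for cand, _ in votes.most_common()]

def annotate(sentence: str, target: str, candidates: List[str],
             judge: Callable[[str, str, str], bool],
             extra: List[str]) -> Dict[str, List[str]]:
    """Human step: keep the machine candidates the annotator accepts
    and add any substitutes the annotator supplies that are missing."""
    accepted = [c for c in candidates if judge(sentence, target, c)]
    added = [c for c in extra if c not in accepted]
    return {"machine_accepted": accepted, "human_added": added}

The pooling step preserves the diversity of the machine-generated candidates, while the judging step keeps final quality under human control.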