Yanjie Liang
2025
PresentAgent: Multimodal Agent for Presentation Video Generation
Jingwei Shi | Zeyu Zhang | Biao Wu | Yanjie Liang | Meng Fang | Ling Chen | Yang Zhao
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
We present PresentAgent, a multimodal agent that transforms long-form documents into narrated presentation videos. While existing approaches are limited to generating static slides or text summaries, our method advances beyond these limitations by producing fully synchronized visual and spoken content that closely mimics human-style presentations. To achieve this, PresentAgent employs a modular pipeline that systematically segments the input document, plans and renders slide-style visual frames, generates contextual spoken narration with large language models and Text-to-Speech models, and seamlessly composes the final video with precise audio-visual alignment. Given the complexity of evaluating such multimodal outputs, we introduce PresentEval, a unified assessment framework powered by Vision-Language Models that scores videos through prompt-based evaluation along three dimensions: content fidelity, visual clarity, and audience comprehension. Our experimental validation on a curated dataset of 30 document–presentation pairs demonstrates that PresentAgent approaches human-level quality across all evaluation metrics. These results highlight the significant potential of controllable multimodal agents in transforming static textual materials into dynamic, effective, and accessible presentation formats.
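The pipeline described in the abstract can be read as a document-to-video loop: segment the document, plan a slide, generate narration, synthesize speech, and compose the result. The sketch below illustrates that flow in Python; the helper names (segment_document, plan_slide, narrate, build_presentation) and the llm/tts/renderer/composer callables are illustrative assumptions, not the actual PresentAgent implementation.

```python
# Minimal pipeline sketch; helper names and callables are illustrative
# placeholders, not the actual PresentAgent API.
from dataclasses import dataclass


@dataclass
class Segment:
    title: str
    text: str


def segment_document(doc: str) -> list[Segment]:
    """Split a long document into section-sized segments (placeholder heuristic)."""
    parts = [p.strip() for p in doc.split("\n\n") if p.strip()]
    return [Segment(title=f"Section {i + 1}", text=p) for i, p in enumerate(parts)]


def plan_slide(llm, seg: Segment) -> str:
    """Ask an LLM to turn one segment into bullet-point slide content."""
    return llm(f"Summarize as 3-5 slide bullets:\n{seg.text}")


def narrate(llm, tts, slide: str) -> bytes:
    """Generate a spoken script aligned with the slide, then synthesize audio."""
    script = llm(f"Write a short spoken narration for this slide:\n{slide}")
    return tts(script)


def build_presentation(doc: str, llm, tts, renderer, composer) -> str:
    """Compose a narrated presentation video from a long-form document."""
    clips = []
    for seg in segment_document(doc):
        slide = plan_slide(llm, seg)
        frame = renderer(slide)        # render slide text to an image frame
        audio = narrate(llm, tts, slide)
        clips.append((frame, audio))   # one synchronized slide/audio pair per segment
    return composer(clips)             # concatenate pairs into the final video file
```

Passing the LLM, TTS, renderer, and composer in as callables keeps each stage swappable, which mirrors the modular structure the abstract describes.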
Inducing Argument Facets for Faithful Opinion Summarization
Jian Wang | Yanjie Liang | Yuqing Sun | Bin Gong
Findings of the Association for Computational Linguistics: EMNLP 2025
The faithful opinion summarization task refers to generating a summary for a set of documents that covers both the majority and minority opinions in those documents. Inspired by the insight from cognitive science that the argument facet is the focus of an opinion, we propose the facets-guided opinion summarization method (FacSum). By inducing the facets, we partition the documents into multiple facet-specific sets. Key phrases are then extracted as representatives of each set, and the number of facets is used to constrain the length of the summary; both are used to guide large language models (LLMs) to cover the different argument facets of the opinions while keeping the summary concise. We perform experiments on two representative datasets, and the results show that our method outperforms state-of-the-art (SOTA) methods and multiple LLMs. Ablation studies indicate that the induced facets improve model performance by enabling coverage of minority opinions while preserving the majority ones. Results with different LLMs demonstrate that our method improves the performance of LLMs of varying sizes. We also apply FacSum to the summarization of professional paper reviews, and the results confirm its effectiveness in specialized domains.
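A compact Python sketch of the facets-guided prompting idea follows. The clustering and key-phrase functions (cluster_fn, phrase_fn), the one-sentence-per-facet length budget, and all helper names are assumptions made for illustration, not the released FacSum code.

```python
# Illustrative sketch of facets-guided summarization; names and the length
# heuristic are assumptions, not the authors' released implementation.
from collections import defaultdict


def induce_facets(cluster_fn, documents: list[str]) -> dict[int, list[str]]:
    """Partition opinion documents into facet-specific sets via a clustering function."""
    facets = defaultdict(list)
    for doc in documents:
        facets[cluster_fn(doc)].append(doc)
    return facets


def extract_key_phrases(phrase_fn, docs: list[str], k: int = 3) -> list[str]:
    """Pick k representative key phrases for one facet-specific set."""
    return phrase_fn(" ".join(docs))[:k]


def facsum_summarize(llm, cluster_fn, phrase_fn, documents: list[str]) -> str:
    """Guide an LLM to cover every induced facet while bounding summary length."""
    facets = induce_facets(cluster_fn, documents)
    guidance = []
    for facet_id, docs in facets.items():
        phrases = extract_key_phrases(phrase_fn, docs)
        guidance.append(f"Facet {facet_id}: {', '.join(phrases)}")
    # Assumed budget: one sentence per facet keeps the summary concise.
    max_sentences = len(facets)
    prompt = (
        "Summarize the following opinions, covering every facet listed below "
        f"in at most {max_sentences} sentences.\n"
        "Facets and key phrases:\n" + "\n".join(guidance) + "\n\n"
        "Opinions:\n" + "\n".join(documents)
    )
    return llm(prompt)
```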
Co-authors
- Ling Chen 1
- Meng Fang 1
- Bin Gong (龚斌) 1
- Jingwei Shi 1
- Yuqing Sun (孙宇清) 1