Yuhao Ye
2024
PSST: A Benchmark for Evaluation-driven Text Public-Speaking Style Transfer
Huashan Sun | Yixiao Wu | Yizhe Yang | Yinghao Li | Jiawei Li | Yuhao Ye | Yang Gao
Findings of the Association for Computational Linguistics: EMNLP 2024
Modeling language style is necessary for AI systems to accurately understand and generate diverse human language. However, previous research on text style transfer has focused primarily on sentence-level, data-driven approaches, limiting exploration of the potential problems of large language models (LLMs) and their ability to meet complex application needs. To overcome these limitations, we introduce a novel task called Public-Speaking Style Transfer (PSST), which aims to simulate how humans transform passage-level official texts into a public-speaking style. Grounded in an analysis of real-world data from a linguistic perspective, we decompose public-speaking style into key sub-styles in order to pose challenges and quantify the style-modeling capability of LLMs. For such intricate text style transfer, we further propose a fine-grained evaluation framework to analyze the characteristics of stylized texts and identify their problems. Comprehensive experiments suggest that current LLMs struggle to generate public-speaking texts that align with human preferences, primarily due to excessive stylization and loss of semantic information. We will release our data, code, and model upon acceptance.
Fundamental Capabilities of Large Language Models and their Applications in Domain Scenarios: A Survey
Jiawei Li | Yizhe Yang | Yu Bai | Xiaofeng Zhou | Yinghao Li | Huashan Sun | Yuhang Liu | Xingpeng Si | Yuhao Ye | Yixiao Wu | 林一冠 | Bin Xu | Ren Bowen | Chong Feng | Yang Gao | Heyan Huang
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large Language Models (LLMs) demonstrate significant value in domain-specific applications, benefiting from their fundamental capabilities. Nevertheless, it remains unclear which fundamental capabilities contribute to success in specific domains. Moreover, existing benchmark-based evaluation cannot effectively reflect performance in real-world applications. In this survey, we review recent advances of LLMs in domain applications, aiming to summarize the fundamental capabilities and how they work in combination. Furthermore, we establish connections between fundamental capabilities and specific domains, evaluating the varying importance of different capabilities. Based on our findings, we propose a reliable strategy for choosing more robust backbone LLMs for real-world, domain-specific applications.