AlignCap: Aligning Speech Emotion Captioning to Human Preferences

Ziqi Liang, Haoxiang Shi, Hanhui Chen


Abstract
Speech Emotion Captioning (SEC) has gradually become an active research task. The emotional content conveyed through human speech is often complex, and classifying it into fixed categories may not be enough to fully capture speech emotions. Describing speech emotions through natural language may be a more effective approach. However, existing SEC methods often produce hallucinations and lose generalization on unseen speech. To overcome these problems, we propose AlignCap, which aligns speech emotion captioning to human preferences using a large language model (LLM) and has two properties: 1) Speech-Text Alignment, which minimizes the divergence between the LLM's response prediction distributions for speech and text inputs using knowledge distillation (KD) regularization; 2) Human Preference Alignment, where we design preference optimization (PO) regularization to eliminate factuality and faithfulness hallucinations. We also extract emotional clues as a prompt to enrich fine-grained information under KD regularization. Experiments demonstrate that AlignCap outperforms other state-of-the-art methods on the zero-shot SEC task.
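A minimal sketch of the two regularizers named in the abstract, written in PyTorch. The KD term is expressed as a KL divergence between the text-conditioned (teacher) and speech-conditioned (student) token distributions; the PO term is shown as a DPO-style preference loss, one plausible instantiation of preference optimization rather than the paper's exact formulation. All function and variable names (kd_regularization, po_regularization, beta, etc.) are illustrative assumptions, not the authors' code.

    import torch
    import torch.nn.functional as F

    def kd_regularization(speech_logits: torch.Tensor,
                          text_logits: torch.Tensor,
                          temperature: float = 1.0) -> torch.Tensor:
        # Both logits tensors: (batch, seq_len, vocab_size), produced by the
        # same LLM head for the same target response, conditioned on the raw
        # speech input vs. the ground-truth transcript.
        teacher = F.softmax(text_logits.detach() / temperature, dim=-1)  # text branch as teacher
        student = F.log_softmax(speech_logits / temperature, dim=-1)     # speech branch as student
        # KL(teacher || student), summed over the vocabulary and averaged
        # over the batch.
        return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2

    def po_regularization(chosen_logp: torch.Tensor,
                          rejected_logp: torch.Tensor,
                          ref_chosen_logp: torch.Tensor,
                          ref_rejected_logp: torch.Tensor,
                          beta: float = 0.1) -> torch.Tensor:
        # DPO-style loss over sequence log-probabilities, each of shape
        # (batch,): raise the margin of the preferred (faithful) caption
        # over the rejected (hallucinated) one, relative to a frozen
        # reference model.
        margin = beta * ((chosen_logp - ref_chosen_logp)
                         - (rejected_logp - ref_rejected_logp))
        return -F.logsigmoid(margin).mean()

In training, the overall objective would presumably combine the standard captioning loss with these two terms under some hyperparameter weighting; see the paper for the actual formulation.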
Anthology ID:
2024.emnlp-main.224
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3837–3846
URL:
https://aclanthology.org/2024.emnlp-main.224
Cite (ACL):
Ziqi Liang, Haoxiang Shi, and Hanhui Chen. 2024. AlignCap: Aligning Speech Emotion Captioning to Human Preferences. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3837–3846, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
AlignCap: Aligning Speech Emotion Captioning to Human Preferences (Liang et al., EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.224.pdf