Robinson Piramuthu
2025
MDSEval: A Meta-Evaluation Benchmark for Multimodal Dialogue Summarization
Yinhong Liu | Jianfeng He | Hang Su | Ruixue Lian | Yi Nian | Jake W. Vincent | Srikanth Vishnubhotla | Robinson Piramuthu | Saab Mansour
Findings of the Association for Computational Linguistics: EMNLP 2025
Multimodal Dialogue Summarization (MDS) is a critical task with wide-ranging applications. To support the development of effective MDS models, robust automatic evaluation methods are essential for reducing both cost and human effort. However, such methods require a strong meta-evaluation benchmark grounded in human annotations. In this work, we introduce MDSEval, the first meta-evaluation benchmark for MDS, consisting of image-sharing dialogues, corresponding summaries, and human judgments across eight well-defined quality aspects. To ensure data quality and richness, we propose a novel filtering framework leveraging Mutually Exclusive Key Information (MEKI) across modalities. Our work is the first to identify and formalize key evaluation dimensions specific to MDS. Finally, we benchmark state-of-the-art evaluation methods, revealing their limitations in distinguishing summaries from advanced MLLMs and their susceptibility to various biases.
2022
VISITRON: Visual Semantics-Aligned Interactively Trained Object-Navigator
Ayush Shrivastava | Karthik Gopalakrishnan | Yang Liu | Robinson Piramuthu | Gokhan Tur | Devi Parikh | Dilek Hakkani-Tur
Findings of the Association for Computational Linguistics: ACL 2022
Interactive robots navigating photo-realistic environments need to be trained to effectively leverage and handle the dynamic nature of dialogue in addition to the challenges underlying vision-and-language navigation (VLN). In this paper, we present VISITRON, a multi-modal Transformer-based navigator better suited to the interactive regime inherent to Cooperative Vision-and-Dialog Navigation (CVDN). VISITRON is trained to: i) identify and associate object-level concepts and semantics between the environment and dialogue history, ii) identify when to interact vs. navigate via imitation learning of a binary classification head. We perform extensive pre-training and fine-tuning ablations with VISITRON to gain empirical insights and improve performance on CVDN. VISITRON’s ability to identify when to interact leads to a natural generalization of the game-play mode introduced by Roman et al. (2020) for enabling the use of such models in different environments. VISITRON is competitive with models on the static CVDN leaderboard and attains state-of-the-art performance on the Success weighted by Path Length (SPL) metric.