Proceedings of the First Workshop On Transcript Understanding

Franck Dernoncourt, Thien Huu Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, Trung H. Bui, David Seunghyun Yoon (Editors)


Anthology ID: 2022.tu-1
Month: October
Year: 2022
Address: Gyeongju, South Korea
Venue: TU
Publisher: International Conference on Computational Linguistics
URL: https://aclanthology.org/2022.tu-1
PDF: https://aclanthology.org/2022.tu-1.pdf

Leveraging Non-dialogue Summaries for Dialogue Summarization
Seongmin Park | Dongchan Shin | Jihwa Lee

To mitigate the lack of diverse dialogue summarization datasets in academia, we present methods to utilize non-dialogue summarization data for enhancing dialogue summarization systems. We apply transformations to document summarization data pairs to create training data that better befit dialogue summarization. The suggested transformations also retain desirable properties of non-dialogue datasets, such as improved faithfulness to the source text. We conduct extensive experiments across both English and Korean to verify our approach. Although absolute gains in ROUGE naturally plateau as more dialogue summarization samples are introduced, utilizing non-dialogue data for training significantly improves summarization performance in zero- and few-shot settings and enhances faithfulness across all training regimes.
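
As a purely illustrative sketch rather than the authors' actual transformation, one way to recast a document-summary pair as pseudo-dialogue training data is to split the source document into sentence-level turns attributed to alternating speakers; the function name and speaker scheme below are hypothetical.

```python
# Hypothetical illustration only: convert a document-summary pair into a
# dialogue-style pair by treating each sentence as a turn from an alternating
# speaker. The paper's actual transformations may differ.
import re

def to_pseudo_dialogue(document: str, summary: str, speakers=("A", "B")):
    """Recast one document-summary pair as a pseudo-dialogue training pair."""
    # Naive sentence split; a real pipeline would use a proper segmenter.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    turns = [f"{speakers[i % len(speakers)]}: {sent}" for i, sent in enumerate(sentences)]
    return {"dialogue": "\n".join(turns), "summary": summary}

example = to_pseudo_dialogue(
    "The council met on Monday. It approved the new budget. Members debated for hours.",
    "The council approved the budget after a long debate.",
)
print(example["dialogue"])
```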

Knowledge Transfer with Visual Prompt in multi-modal Dialogue Understanding and Generation
Minjun Zhu | Yixuan Weng | Bin Li | Shizhu He | Kang Liu | Jun Zhao

The Visual Dialogue (VD) task has recently received increasing attention in AI research. Visual Dialogue aims to generate multi-round, interactive responses based on the dialogue history and image content. Existing textual dialogue models cannot fully understand visual information, resulting in a lack of scene features when communicating with humans continuously. Therefore, how to efficiently fuse multi-modal data features remains a challenge. In this work, we propose a knowledge transfer method with visual prompts (VPTG) that fuses multi-modal data; it is a flexible module that enables a text-only seq2seq model to handle visual dialogue tasks. VPTG conducts text-image co-learning and multi-modal information fusion with visual prompts and visual knowledge distillation. Specifically, we construct visual prompts from visual representations and then induce sequence-to-sequence (seq2seq) models to fuse visual information and textual contexts via visual-text patterns. We also realize visual knowledge transfer through distillation between the two models’ text representations, so that the seq2seq model can actively learn visual semantic representations. Extensive experiments on the multi-modal dialogue understanding and generation (MDUG) datasets show that the proposed VPTG outperforms other single-modal methods, demonstrating the effectiveness of visual prompts and visual knowledge transfer.
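
The abstract names two mechanisms, visual prompting and visual knowledge distillation. The minimal PyTorch sketch below shows what they could look like under assumed feature dimensions and module names; it is not the authors' released implementation.

```python
# A minimal sketch under assumed shapes (not the authors' code): pooled visual
# features are projected into a fixed number of "prompt" embeddings prepended
# to the text encoder's token embeddings, and an MSE distillation loss aligns
# the seq2seq student's representation with a multi-modal teacher's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualPromptFusion(nn.Module):
    def __init__(self, visual_dim=2048, hidden_dim=768, num_prompts=8):
        super().__init__()
        self.proj = nn.Linear(visual_dim, num_prompts * hidden_dim)
        self.num_prompts, self.hidden_dim = num_prompts, hidden_dim

    def forward(self, visual_feats, token_embeds):
        # visual_feats: (batch, visual_dim); token_embeds: (batch, seq_len, hidden_dim)
        prompts = self.proj(visual_feats).view(-1, self.num_prompts, self.hidden_dim)
        # Prepend visual prompts so a text-only encoder can attend to them.
        return torch.cat([prompts, token_embeds], dim=1)

def distillation_loss(student_repr, teacher_repr):
    # Pull the seq2seq model's text representation toward the teacher's.
    return F.mse_loss(student_repr, teacher_repr.detach())

fusion = VisualPromptFusion()
fused = fusion(torch.randn(2, 2048), torch.randn(2, 20, 768))  # -> (2, 28, 768)
```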

Model Transfer for Event tracking as Transcript Understanding for Videos of Small Group Interaction
Sumit Agarwal | Rosanna Vitiello | Carolyn Rosé

Videos of group interactions contain a wealth of information beyond what is directly communicated in a transcript of the discussion. Tracking who has participated throughout an extended interaction, and how each participant’s trajectory relates to the others’, is the foundation for joint activity understanding, though it comes with some unique challenges in videos of tightly coupled group work. Motivated by insights into the properties of such scenarios, including group composition and the nature of task-oriented, goal-directed work, we present a successful proof-of-concept. In particular, we present a transfer experiment to a dyadic robot construction task, an ablation study, and a qualitative analysis.

BehanceMT: A Machine Translation Corpus for Livestreaming Video Transcripts
Minh Van Nguyen | Franck Dernoncourt | Thien Nguyen

Machine translation (MT) is an important task in natural language processing, which aims to translate a sentence in a source language into a sentence with the same or similar meaning in a target language. Despite the huge effort devoted to building MT systems for different language pairs, most previous work focuses on formal-language settings, where the text to be translated comes from written sources such as books and news articles. As a result, such MT systems could fail to translate livestreaming video transcripts, where the text is often shorter and might be grammatically incorrect. To overcome this issue, we introduce a novel MT corpus, BehanceMT, for livestreaming video transcript translation. Our corpus contains parallel transcripts for 3 language pairs, where English is the source language and Spanish, Chinese, and Arabic are the target languages. Experimental results show that fine-tuning a pretrained MT model on BehanceMT significantly improves the performance of the model in translating video transcripts across the 3 language pairs. In addition, the fine-tuned MT model outperforms Google Translate in 2 out of 3 language pairs, further demonstrating the usefulness of our proposed dataset for video transcript translation. BehanceMT will be publicly released upon the acceptance of the paper.
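
As a hedged sketch of the fine-tuning setup described above, the snippet below uses the Hugging Face transformers API with a generic public English-Spanish checkpoint; since BehanceMT is not yet released, the model name and training pair shown here are placeholders for illustration only.

```python
# Sketch only: fine-tune a generic pretrained MT checkpoint on transcript-style
# parallel data. The checkpoint and example pair are placeholders, not details
# taken from the paper.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "Helsinki-NLP/opus-mt-en-es"  # placeholder English->Spanish checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# Each item pairs a short, possibly ungrammatical transcript line with its translation.
pairs = [("so we're gonna add a layer mask here",
          "así que vamos a añadir una máscara de capa aquí")]

model.train()
for src, tgt in pairs:
    batch = tokenizer(src, text_target=tgt, return_tensors="pt", truncation=True)
    loss = model(**batch).loss  # standard cross-entropy over target tokens
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```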

Investigating the Impact of ASR Errors on Spoken Implicit Discourse Relation Recognition
Linh The Nguyen | Dat Quoc Nguyen

We present an empirical study investigating the influence of automatic speech recognition (ASR) errors on the spoken implicit discourse relation recognition (IDRR) task. We construct a spoken dataset for this task based on the Penn Discourse Treebank 2.0. On this dataset, we conduct “Cascaded” experiments employing state-of-the-art ASR and text-based IDRR models and find that ASR errors significantly decrease IDRR performance. In addition, the “Cascaded” approach performs remarkably better than an “End-to-End” one that directly predicts a relation label for each input argument speech pair.
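
A minimal sketch of the “Cascaded” setup described above, with stub functions standing in for the state-of-the-art ASR and text-based IDRR models used in the study; every name and return value below is a placeholder.

```python
# Placeholder pipeline: transcribe each argument's speech with an ASR model,
# then feed the (possibly noisy) transcripts to a text-based IDRR classifier.
# Both functions below are stubs standing in for real models.
def transcribe(audio_path: str) -> str:
    # Stand-in for an ASR system; real output would contain recognition errors.
    return "we missed the deadline"

def predict_relation(arg1_text: str, arg2_text: str) -> str:
    # Stand-in for a text-based implicit discourse relation classifier.
    return "Contingency.Cause"

def cascaded_idrr(arg1_audio: str, arg2_audio: str) -> str:
    # ASR errors made in transcribe() propagate into predict_relation(),
    # which is exactly the effect the study quantifies.
    return predict_relation(transcribe(arg1_audio), transcribe(arg2_audio))

print(cascaded_idrr("arg1.wav", "arg2.wav"))
```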