Samuel Osebe
2024
Towards Multi-Modal Co-Reference Resolution in Conversational Shopping Agents
Samuel Osebe | Prashan Wanigasekara | Thomas Gueudre | Thanh Tran | Rahul Sharma | Fan Yang | Qian Hu | Weitong Ruan | Emre Barut | Chengwei Su
Proceedings of the Seventh Workshop on e-Commerce and NLP @ LREC-COLING 2024
The context of modern smart voice assistants is often multi-modal: users consume images, audio, and video content simultaneously. In such a setup, co-reference resolution is especially challenging, as references run across both modalities and dialogue turns. We explore the problem of multi-modal co-reference resolution in multi-turn dialogues and quantify the performance of multi-modal LLMs on a specially curated dataset of long, image-interleaved conversations between a voice assistant and a human in a shopping use case. We propose a custom architecture for multi-modal embedding alignment using a novel parameter augmentation technique. Our Parameter Augmented LLM approach shows a 4.9% absolute F1 improvement over a cross-attention baseline while training 4x fewer parameters.
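The abstract does not detail the parameter augmentation technique, so the following is a minimal PyTorch sketch of one common way to pursue the same goal: align a frozen image encoder with a frozen LLM by training only a small projection module. The class name, dimensions, and prefix-token design are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ImageToTokenAdapter(nn.Module):
    """Hypothetical adapter: maps frozen image-encoder embeddings into a
    frozen LLM's token-embedding space as a few "soft" prefix tokens, so
    that only this small module needs gradient updates."""

    def __init__(self, image_dim: int = 768, llm_dim: int = 1024, n_prefix: int = 4):
        super().__init__()
        self.n_prefix = n_prefix
        self.llm_dim = llm_dim
        # Two-layer projection from image space to n_prefix LLM token slots.
        self.proj = nn.Sequential(
            nn.Linear(image_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, n_prefix * llm_dim),
        )

    def forward(self, image_emb: torch.Tensor) -> torch.Tensor:
        # image_emb: (batch, image_dim) -> (batch, n_prefix, llm_dim)
        return self.proj(image_emb).view(-1, self.n_prefix, self.llm_dim)

adapter = ImageToTokenAdapter()
image_emb = torch.randn(2, 768)      # stand-in for frozen image-encoder output
soft_tokens = adapter(image_emb)     # (2, 4, 1024): prepend to text embeddings
optimizer = torch.optim.AdamW(adapter.parameters(), lr=1e-4)  # adapter only
```

Training only the adapter is consistent with the abstract's claim of a 4x reduction in trained parameters relative to a cross-attention baseline, though the mechanism in the paper itself may differ.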
2023
UMASS_BioNLP at MEDIQA-Chat 2023: Can LLMs generate high-quality synthetic note-oriented doctor-patient conversations?
Junda Wang | Zonghai Yao | Avijit Mitra | Samuel Osebe | Zhichao Yang | Hong Yu
Proceedings of the 5th Clinical Natural Language Processing Workshop
This paper presents the UMASS_BioNLP team's participation in the MEDIQA-Chat 2023 shared task, covering Task-A and Task-C. We focus especially on Task-C and propose a novel LLM cooperation system, a doctor-patient loop, to generate high-quality synthetic conversation datasets. The experimental results demonstrate that our approach yields reasonable performance as evaluated by automatic metrics such as ROUGE, medical concept recall, BLEU, and Self-BLEU. Furthermore, we conducted a comparative analysis between our proposed method, ChatGPT, and GPT-4; this analysis also investigates the potential of using cooperating LLMs to generate high-quality datasets.
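The abstract names the doctor-patient loop but not its mechanics; the sketch below shows one plausible reading, in which two role-prompted LLM calls alternate turns, each conditioned on the same source clinical note. The `generate` helper, prompts, and stopping rule are hypothetical stand-ins, not APIs or details from the paper.

```python
def generate(system_prompt: str, history: list[str]) -> str:
    """Stand-in for a chat-completion call; swap in a real LLM client.
    (Hypothetical helper, not an interface from the paper.)"""
    return f"[reply conditioned on {len(history)} prior turns]"

def doctor_patient_loop(note: str, max_turns: int = 10) -> list[str]:
    """Alternate doctor and patient roles, both grounded in the same note,
    to synthesize a note-oriented conversation."""
    doctor_sys = f"You are a doctor. Elicit the facts recorded in this note:\n{note}"
    patient_sys = f"You are a patient whose history matches this note:\n{note}"
    dialogue: list[str] = []
    for turn in range(max_turns):
        system = doctor_sys if turn % 2 == 0 else patient_sys
        dialogue.append(generate(system, dialogue))
    return dialogue

conversation = doctor_patient_loop("45-year-old with persistent cough ...")
```

Grounding both roles in the same note is one way such a loop could keep the synthetic dialogue faithful to the source record, which the automatic metrics above (e.g., medical concept recall) would then measure.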