2024
XrayGPT: Chest Radiographs Summarization using Large Medical Vision-Language Models
Omkar Chakradhar Thawakar | Abdelrahman M. Shaker | Sahal Shaji Mullappilly | Hisham Cholakkal | Rao Muhammad Anwer | Salman Khan | Jorma Laaksonen | Fahad Khan
Proceedings of the 23rd Workshop on Biomedical Natural Language Processing
The latest breakthroughs in large language models (LLMs) and vision-language models (VLMs) have showcased promising capabilities toward performing a wide range of tasks. Such models are typically trained on massive datasets comprising billions of image-text pairs with diverse tasks. However, their performance on task-specific domains, such as radiology, is still under-explored. While a few works have recently explored LLM-based conversational medical models, they mainly focus on text-based analysis. In this paper, we introduce XrayGPT, a conversational medical vision-language model (VLM) that can analyze and answer open-ended questions about chest radiographs. Specifically, we align a medical visual encoder with a fine-tuned LLM to equip it with visual conversation abilities, grounded in an understanding of radiographs and medical knowledge. For improved alignment of chest radiograph data, we generate ~217k interactive and high-quality summaries from free-text radiology reports. Extensive experiments are conducted to validate the merits of XrayGPT. For expert evaluation, certified medical doctors assessed the output of our XrayGPT on a test subset; the results reveal that more than 70% of the responses are scientifically accurate, with an average score of 4/5. We hope our simple and effective method establishes a solid baseline, facilitating future research toward automated analysis and summarization of chest radiographs. Code, models, and instruction sets will be publicly released.
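The visual-encoder-to-LLM alignment described in the abstract can be illustrated with a minimal sketch: a frozen visual encoder produces radiograph features that a small linear projection maps into the LLM's token-embedding space. The class name and dimensions below are illustrative assumptions, not the released XrayGPT code.

    import torch
    import torch.nn as nn

    class VisualAligner(nn.Module):
        """Projects frozen visual-encoder features into an LLM embedding space.

        Hypothetical dimensions: 768-d visual features, 4096-d LLM embeddings.
        """
        def __init__(self, vis_dim: int = 768, llm_dim: int = 4096):
            super().__init__()
            self.proj = nn.Linear(vis_dim, llm_dim)

        def forward(self, vis_feats: torch.Tensor) -> torch.Tensor:
            # vis_feats: (batch, num_patches, vis_dim) from a frozen medical image encoder
            return self.proj(vis_feats)  # (batch, num_patches, llm_dim), consumed by the LLM as soft prompts

    # Toy usage with random features standing in for encoder output.
    aligner = VisualAligner()
    fake_feats = torch.randn(2, 49, 768)
    prompt_embeds = aligner(fake_feats)
    print(prompt_embeds.shape)  # torch.Size([2, 49, 4096])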
Text-to-Multimodal Retrieval with Bimodal Input Fusion in Shared Cross-Modal Transformer
Pranav Arora | Selen Pehlivan | Jorma Laaksonen
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
The rapid proliferation of multimedia content has necessitated the development of effective multimodal video retrieval systems. Multimodal video retrieval is a non-trivial task involving the retrieval of relevant information across different modalities, such as text, audio, and visual. This work aims to improve multimodal retrieval by guiding the creation of a shared embedding space with task-specific contrastive loss functions. An important aspect of our work is to propose a model that learns retrieval cues for the textual query from multiple modalities, both separately and jointly, within a hierarchical architecture that can be flexibly extended and fine-tuned for any number of modalities. To this end, the loss functions and the architectural design of the model are developed with a strong focus on increasing the mutual information between the textual and cross-modal representations. The proposed approach is quantitatively evaluated on the MSR-VTT and YouCook2 text-to-video retrieval benchmark datasets. The results show that the approach not only holds its own against state-of-the-art methods but also outperforms them in a number of scenarios, with notable relative improvements over the baseline in the R@1, R@5, and R@10 metrics.
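As a rough illustration of a task-specific contrastive objective for a shared text/cross-modal embedding space, the sketch below computes a symmetric InfoNCE-style loss with batch-wise positives. This is a generic formulation under assumed inputs, not the paper's exact loss.

    import torch
    import torch.nn.functional as F

    def symmetric_contrastive_loss(text_emb, crossmodal_emb, temperature: float = 0.07):
        """Symmetric InfoNCE loss: matching text/cross-modal pairs share the same batch index."""
        text_emb = F.normalize(text_emb, dim=-1)
        crossmodal_emb = F.normalize(crossmodal_emb, dim=-1)
        logits = text_emb @ crossmodal_emb.t() / temperature  # (batch, batch) similarity matrix
        targets = torch.arange(text_emb.size(0), device=text_emb.device)
        loss_t2m = F.cross_entropy(logits, targets)      # text -> cross-modal direction
        loss_m2t = F.cross_entropy(logits.t(), targets)  # cross-modal -> text direction
        return 0.5 * (loss_t2m + loss_m2t)

    # Toy usage: 8 text embeddings against 8 fused audio-visual embeddings.
    loss = symmetric_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
    print(loss.item())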
2022
CLIP4IDC: CLIP for Image Difference Captioning
Zixin Guo | Tzu-Jui Wang | Jorma Laaksonen
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Image Difference Captioning (IDC) aims at generating sentences that describe the differences between two similar-looking images. Conventional approaches learn an IDC model with a pre-trained and usually frozen visual feature extractor. Accordingly, two major issues may arise: (1) a large domain gap usually exists between the pre-training datasets used for training such a visual encoder and those of the downstream IDC task, and (2) the visual feature extractor, when encoding the two images separately, often does not effectively capture the visual changes between them. Motivated by the excellent zero-shot performance of the recently proposed CLIP, we propose CLIP4IDC, which transfers a CLIP model to the IDC task to address these issues. Rather than directly fine-tuning CLIP to generate sentences, we introduce an adaptation training process that adapts CLIP's visual encoder to capture and align differences in image pairs based on the textual descriptions. Experiments on three IDC benchmark datasets, CLEVR-Change, Spot-the-Diff, and Image-Editing-Request, demonstrate the effectiveness of CLIP4IDC.
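To make the pair-encoding issue concrete, here is a hedged sketch in which both images are encoded by the same (CLIP-like) visual backbone and their features are fused into a single "difference" embedding that can be aligned to the textual change description. Fusion by concatenation plus a linear layer is an assumption for illustration, not the exact CLIP4IDC design.

    import torch
    import torch.nn as nn

    class PairDifferenceEncoder(nn.Module):
        """Encodes an image pair into one embedding representing the change between the images.

        `backbone` stands in for a CLIP-like visual encoder returning (batch, dim) features.
        """
        def __init__(self, backbone: nn.Module, dim: int = 512):
            super().__init__()
            self.backbone = backbone
            self.fuse = nn.Linear(2 * dim, dim)  # fuse "before" and "after" image features

        def forward(self, img_before, img_after):
            f_before = self.backbone(img_before)
            f_after = self.backbone(img_after)
            return self.fuse(torch.cat([f_before, f_after], dim=-1))

    # Toy usage: a flatten+linear stand-in for the visual backbone on 64x64 images.
    backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 512))
    encoder = PairDifferenceEncoder(backbone)
    pair_emb = encoder(torch.randn(2, 3, 64, 64), torch.randn(2, 3, 64, 64))
    print(pair_emb.shape)  # torch.Size([2, 512])

During adaptation, such pair embeddings would be pulled toward the embeddings of their textual change descriptions with a contrastive loss like the one sketched for the retrieval entry above, before a captioning head is trained.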
When to Laugh and How Hard? A Multimodal Approach to Detecting Humor and Its Intensity
Khalid Alnajjar | Mika Hämäläinen | Jörg Tiedemann | Jorma Laaksonen | Mikko Kurimo
Proceedings of the 29th International Conference on Computational Linguistics
Prerecorded laughter accompanying dialog in comedy TV shows encourages the audience to laugh by clearly marking humorous moments in the show. We present an approach for automatically detecting humor in the Friends TV show using multimodal data. Our model is capable of recognizing whether an utterance is humorous or not and of assessing its intensity. We use the prerecorded laughter in the show as annotation, since it marks humor, and the length of the audience's laughter tells us how funny a given joke is. We evaluate the model on episodes it has not been exposed to during training. Our results show that the model correctly detects whether an utterance is humorous 78% of the time and predicts how long the audience's laughter reaction should last with a mean absolute error of 600 milliseconds.
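The two outputs described above (is an utterance humorous, and how long the laughter lasts) can be sketched as a shared multimodal fusion layer with a classification head and a regression head. The feature dimensions and fusion by concatenation are illustrative assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class HumorHead(nn.Module):
        """Joint humor classifier and laughter-duration regressor on fused multimodal features."""
        def __init__(self, text_dim=300, audio_dim=128, video_dim=512, hidden=256):
            super().__init__()
            self.fuse = nn.Sequential(nn.Linear(text_dim + audio_dim + video_dim, hidden), nn.ReLU())
            self.is_humor = nn.Linear(hidden, 1)   # logit: humorous or not
            self.duration = nn.Linear(hidden, 1)   # predicted laughter length in seconds

        def forward(self, text_feat, audio_feat, video_feat):
            h = self.fuse(torch.cat([text_feat, audio_feat, video_feat], dim=-1))
            return self.is_humor(h).squeeze(-1), self.duration(h).squeeze(-1)

    # Toy usage with random utterance-level features for a batch of 4 utterances.
    model = HumorHead()
    logit, seconds = model(torch.randn(4, 300), torch.randn(4, 128), torch.randn(4, 512))
    print(torch.sigmoid(logit), seconds)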
2018
The MeMAD Submission to the WMT18 Multimodal Translation Task
Stig-Arne Grönroos | Benoit Huet | Mikko Kurimo | Jorma Laaksonen | Bernard Merialdo | Phu Pham | Mats Sjöberg | Umut Sulubacak | Jörg Tiedemann | Raphael Troncy | Raúl Vázquez
Proceedings of the Third Conference on Machine Translation: Shared Task Papers
This paper describes the MeMAD project entry to the WMT Multimodal Machine Translation Shared Task. We propose adapting the Transformer neural machine translation (NMT) architecture to a multimodal setting. We also describe the preliminary experiments with text-only translation systems that led us to this choice. We have the top-scoring system for both English-to-German and English-to-French according to the automatic metrics for flickr18. Our experiments show that the effect of the visual features in our system is small; our largest gains come from the quality of the underlying text-only NMT system. We find that appropriate use of additional data is effective.
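One simple way to make a Transformer NMT encoder multimodal is to prepend a projected global image feature to the source token embeddings, as sketched below. This is an assumed illustration of the general idea, not necessarily the MeMAD system's configuration.

    import torch
    import torch.nn as nn

    class VisuallyConditionedEncoderInput(nn.Module):
        """Prepends a projected image feature to the source token embeddings of a Transformer encoder."""
        def __init__(self, img_dim: int = 2048, model_dim: int = 512):
            super().__init__()
            self.img_proj = nn.Linear(img_dim, model_dim)

        def forward(self, token_embeds: torch.Tensor, img_feat: torch.Tensor) -> torch.Tensor:
            # token_embeds: (batch, src_len, model_dim); img_feat: (batch, img_dim)
            vis_token = self.img_proj(img_feat).unsqueeze(1)    # (batch, 1, model_dim)
            return torch.cat([vis_token, token_embeds], dim=1)  # (batch, src_len + 1, model_dim)

    # Toy usage: 10 source tokens plus one visual pseudo-token.
    mixer = VisuallyConditionedEncoderInput()
    out = mixer(torch.randn(2, 10, 512), torch.randn(2, 2048))
    print(out.shape)  # torch.Size([2, 11, 512])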
2015
Towards Reliable Automatic Multimodal Content Analysis
Olli-Philippe Lautenbacher | Liisa Tiittula | Maija Hirvonen | Jorma Laaksonen | Mikko Kurimo
Proceedings of the Fourth Workshop on Vision and Language
2014
SLMotion - An extensible sign language oriented video analysis tool
Matti Karppa | Ville Viitaniemi | Marcos Luzardo | Jorma Laaksonen | Tommi Jantunen
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
We present a software toolkit called SLMotion which provides a framework for automatic and semi-automatic analysis, feature extraction and annotation of individual sign language videos, and which can easily be adapted to batch processing of entire sign language corpora. The program follows a modular design and exposes a NumPy-compatible Python application programming interface that makes it easy and convenient to extend its functionality through scripting. The program includes support for exporting the annotations in ELAN format. The program is released as free software and is available for GNU/Linux and macOS platforms.
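To illustrate the kind of scripting a NumPy-compatible API enables, the sketch below turns a per-frame motion-magnitude array into annotation intervals that could then be exported to ELAN. The function is hypothetical and is not the actual SLMotion API; the input array stands in for a feature-extraction module's output.

    import numpy as np

    def frames_to_intervals(motion: np.ndarray, fps: float, threshold: float):
        """Turns a per-frame motion-magnitude array into (start_s, end_s) annotation intervals.

        Hypothetical post-processing example; not the actual SLMotion API.
        """
        active = motion > threshold
        intervals, start = [], None
        for i, flag in enumerate(active):
            if flag and start is None:
                start = i
            elif not flag and start is not None:
                intervals.append((start / fps, i / fps))
                start = None
        if start is not None:
            intervals.append((start / fps, len(active) / fps))
        return intervals

    # Toy usage: 10 frames at 25 fps with a burst of motion in the middle.
    print(frames_to_intervals(np.array([0, 0, 2, 3, 3, 2, 0, 0, 1, 0], dtype=float), fps=25.0, threshold=1.0))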
S-pot - a benchmark in spotting signs within continuous signing
Ville Viitaniemi | Tommi Jantunen | Leena Savolainen | Matti Karppa | Jorma Laaksonen
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
In this paper we present S-pot, a benchmark setting for evaluating the performance of automatic spotting of signs in continuous sign language videos. The benchmark includes 5539 video files of Finnish Sign Language, ground-truth sign spotting results, a tool for assessing the spottings against the ground truth, and a repository for storing information on the results. In addition, we make our sign detection system and the results obtained with it publicly available as a baseline for comparison and further development.
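A simplified sketch of what assessing spottings against a ground truth can look like: predictions count as hits when they name the right sign within a small temporal tolerance, and precision and recall are computed from the hits. The tolerance-based matching below is an assumption for illustration; the actual S-pot evaluation protocol may differ.

    def evaluate_spottings(predicted, ground_truth, tolerance: float = 0.5):
        """Counts predicted sign spottings that fall within `tolerance` seconds of a ground-truth entry.

        Both inputs are lists of (sign_id, time_in_seconds) tuples. Simplified stand-in,
        not the actual S-pot evaluation tool.
        """
        remaining = list(ground_truth)
        hits = 0
        for sign_id, t in predicted:
            match = next((g for g in remaining if g[0] == sign_id and abs(g[1] - t) <= tolerance), None)
            if match is not None:
                hits += 1
                remaining.remove(match)
        precision = hits / len(predicted) if predicted else 0.0
        recall = hits / len(ground_truth) if ground_truth else 0.0
        return precision, recall

    # Toy usage: one correct spotting, one miss.
    print(evaluate_spottings([("HOUSE", 3.1), ("CAR", 7.9)], [("HOUSE", 3.0), ("CAR", 12.0)]))  # (0.5, 0.5)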
2012
Comparing computer vision analysis of signed language video with motion capture recordings
Matti Karppa | Tommi Jantunen | Ville Viitaniemi | Jorma Laaksonen | Birgitta Burger | Danny De Weerdt
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
We consider a non-intrusive computer-vision method for measuring the motion of a person performing natural signing in video recordings. The quality and usefulness of the method is compared to a traditional marker-based motion capture set-up. The accuracy of descriptors extracted from the video footage is assessed qualitatively in the context of sign language analysis by examining whether the shapes of the curves produced by the different means resemble one another in sequences where the shape could be a source of valuable linguistic information. Quantitative comparison is then performed, first by correlating the computer-vision-based descriptors with the variables gathered with the motion capture equipment. Finally, multivariate linear and non-linear regression methods are applied for predicting the motion capture variables from combinations of computer vision descriptors. The results show that even the simple computer vision method evaluated in this paper can produce promising results for assisting researchers working on sign language analysis.
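The quantitative comparison described above (per-descriptor correlation followed by multivariate linear regression) can be sketched as below with synthetic data standing in for the video descriptors and motion capture variable; it is a generic outline of the procedure, not the paper's exact analysis code.

    import numpy as np

    def correlate_and_predict(cv_descriptors: np.ndarray, mocap_variable: np.ndarray):
        """Correlates each computer-vision descriptor with a motion-capture variable and fits
        a multivariate linear regression predicting the variable from all descriptors.

        Shapes: cv_descriptors (frames, n_descriptors), mocap_variable (frames,).
        """
        correlations = np.array([
            np.corrcoef(cv_descriptors[:, j], mocap_variable)[0, 1]
            for j in range(cv_descriptors.shape[1])
        ])
        design = np.hstack([cv_descriptors, np.ones((cv_descriptors.shape[0], 1))])  # add intercept column
        weights, *_ = np.linalg.lstsq(design, mocap_variable, rcond=None)
        predictions = design @ weights
        return correlations, predictions

    # Toy usage with synthetic data: 100 frames, 3 descriptors.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 3))
    y = 2.0 * x[:, 0] - x[:, 1] + rng.normal(scale=0.1, size=100)
    corrs, preds = correlate_and_predict(x, y)
    print(corrs.round(2), round(float(np.mean((preds - y) ** 2)), 4))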