Lukas Hilgert
2026
BOOM: Beyond Only One Modality – KIT's Multimodal Multilingual Lecture Companion
Sai Koneru | Fabian Retkowski | Christian Huber | Lukas Hilgert | Seymanur Akti | Enes Yavuz Ugan | Alexander Waibel | Jan Niehues
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 3: System Demonstrations)
The globalization of education and the rapid growth of online learning have made localizing educational content a critical challenge. Lecture materials are inherently multimodal, combining spoken audio with visual slides, which requires systems capable of processing multiple input modalities. To provide an accessible and complete learning experience, translations must preserve all modalities: text for reading, slides for visual understanding, and speech for auditory learning. We present BOOM, a multimodal multilingual lecture companion that jointly translates lecture audio and slides to produce synchronized outputs across three modalities: translated text, localized slides with preserved visual elements, and synthesized speech. This end-to-end approach enables students to access lectures in their native language while aiming to preserve the original content in its entirety. Our experiments demonstrate that slide-aware transcripts also yield cascading benefits for downstream tasks such as summarization and question answering. We release our Slide Translation code at https://github.com/saikoneru/image-translator and integrate it in Lecture Translator at https://gitlab.kit.edu/kit/isl-ai4lt/lt-middleware/ltpipeline (all released code and models are licensed under the MIT License).
2025
Next Speaker Prediction for Multi-Speaker Dialogue with Large Language Models
Lukas Hilgert | Jan Niehues
Proceedings of the 8th International Conference on Natural Language and Speech Processing (ICNLSP-2025)
2024
Evaluating and Training Long-Context Large Language Models for Question Answering on Scientific Papers
Lukas Hilgert | Danni Liu | Jan Niehues
Proceedings of the 1st Workshop on Customizable NLP: Progress and Challenges in Customizing NLP for a Domain, Application, Group, or Individual (CustomNLP4U)
With the number of scientific papers published every year growing and current large language models (LLMs) showing state-of-the-art performance on natural language processing (NLP) tasks, we ask whether LLMs can be used to answer questions on scientific papers. We investigate how well state-of-the-art LLMs answer questions on scientific papers by experimenting with long-context versions of the LLaMA 2 model and by evaluating and training on the Qasper dataset. We analyze how well the LLMs handle longer papers and questions that can only be answered by accessing information from distant paragraphs. In our experiments, the performance of these LLMs drops as the length of the input and the position of the relevant information grow. We employ measures ranging from simple and chain-of-thought prompting in zero-shot settings to fine-tuning with QLoRA. While we still observe a performance loss with increased context length, our measures reduce its effects, and we achieve F1 scores similar to those of larger models such as GPT-4.
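The abstract above compares models by F1 score on Qasper-style question answering. As context, a minimal sketch of a standard SQuAD/Qasper-style token-level F1 metric (function names are illustrative, not from the paper) could look like this:

```python
import re
import string
from collections import Counter


def normalize(text: str) -> list[str]:
    """Lowercase, strip punctuation and articles, split on whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return text.split()


def token_f1(prediction: str, reference: str) -> float:
    """Harmonic mean of token precision and recall between answers."""
    pred, ref = normalize(prediction), normalize(reference)
    common = Counter(pred) & Counter(ref)  # multiset intersection
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("The answer is 42", "answer is 42")` evaluates to 1.0 because normalization drops the article "the" before comparing tokens.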