Giovanni Bonetta
2025
All-in-one: Understanding and Generation in Multimodal Reasoning with the MAIA Benchmark
Davide Testa | Giovanni Bonetta | Raffaella Bernardi | Alessandro Bondielli | Alessandro Lenci | Alessio Miaschi | Lucia Passaro | Bernardo Magnini
Findings of the Association for Computational Linguistics: EMNLP 2025
We introduce MAIA (Multimodal AI Assessment), a native-Italian benchmark designed for fine-grained investigation of the reasoning abilities of visual language models on videos. MAIA differs from other available video benchmarks in its design, its reasoning categories, the metric it uses, and the language and culture of the videos. MAIA evaluates Vision Language Models (VLMs) on two aligned tasks: a visual statement verification task and an open-ended visual question-answering task, both on the same set of video-related questions. It considers twelve reasoning categories that aim to disentangle language and vision relations by highlighting the role of the visual input. Thanks to its carefully thought-out design, it simultaneously evaluates VLMs’ consistency and their visually grounded natural language comprehension and generation through an aggregated metric; the low results it reveals highlight the models’ fragility. Last but not least, the video collection has been carefully selected to reflect Italian culture, and the language data are produced by native speakers. Data available at [GitHub](https://github.com/Caput97/MAIA-Multimodal_AI_Assessment.git).
2024
Are You a Good Assistant? Assessing LLM Trustability in Task-oriented Dialogues
Tiziano Labruna | Sofia Brenna | Giovanni Bonetta | Bernardo Magnini
Proceedings of the Tenth Italian Conference on Computational Linguistics (CLiC-it 2024)
Despite the impressive capabilities of recent Large Language Models (LLMs) to generate human-like text, their ability to produce contextually appropriate content for specific communicative situations is still a matter of debate. This issue is particularly crucial when LLMs are employed as assistants to help solve tasks or achieve goals within a given conversational domain. In such scenarios, the assistant is expected to access specific knowledge (e.g., a database of restaurants, a calendar of appointments) that is not directly accessible to the user and must be consistently utilised to accomplish the task. In this paper, we conduct experiments to evaluate the trustworthiness of automatic assistants in task-oriented dialogues. Our findings indicate that state-of-the-art open-source LLMs still face significant challenges in maintaining logical consistency with a knowledge base of facts, highlighting the need for further advancements in this area.
Co-authors
- Bernardo Magnini 2
- Raffaella Bernardi 1
- Alessandro Bondielli 1
- Sofia Brenna 1
- Tiziano Labruna 1