Centre for Linguistic Theory and Studies in Probability Conference (2022)



pdf (full) bib (full)
Proceedings of the 2022 CLASP Conference on (Dis)embodiment

pdf bib
Proceedings of the 2022 CLASP Conference on (Dis)embodiment
Simon Dobnik | Julian Grove | Asad Sayeed

pdf bib
A Small but Informed and Diverse Model: The Case of the Multimodal GuessWhat!? Guessing Game
Claudio Greco | Alberto Testoni | Raffaella Bernardi | Stella Frank

Pre-trained Vision and Language Transformers achieve high performance on downstream tasks due to their ability to transfer representational knowledge accumulated during pretraining on substantial amounts of data. In this paper, we ask whether it is possible to compete with such models using features based on transferred (pre-trained, frozen) representations combined with a lightweight architecture. We take a multimodal guessing task, GuessWhat?!, as our testbed. An ensemble of our lightweight models matches the performance of the fine-tuned pre-trained transformer (LXMERT). An uncertainty analysis of our ensemble shows that the lightweight transferred representations close the data uncertainty gap with LXMERT while retaining the model diversity that yields the ensemble boost. We further demonstrate that LXMERT’s performance gain is due solely to its extra V&L pretraining rather than to architectural improvements. These results argue for flexible integration of multiple features and lightweight models as a viable alternative to large, cumbersome, pre-trained models.
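
A minimal sketch of the general recipe this abstract describes, not the authors' code: a lightweight classification head over frozen, pre-trained (transferred) text and image features, ensembled by averaging member probabilities. Feature dimensions, layer sizes, and the number of candidate objects are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LightweightGuesser(nn.Module):
    """Small head over frozen pre-trained features (illustrative sizes)."""
    def __init__(self, text_dim=768, image_dim=2048, hidden=512, n_candidates=20):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_candidates),  # one score per candidate object
        )

    def forward(self, text_feats, image_feats):
        # text_feats and image_feats come from frozen, pre-trained encoders
        return self.fuse(torch.cat([text_feats, image_feats], dim=-1))

def ensemble_predict(models, text_feats, image_feats):
    # Average the softmax outputs of independently trained ensemble members.
    probs = [m(text_feats, image_feats).softmax(-1) for m in models]
    return torch.stack(probs).mean(0)
```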

pdf bib
A Cross-lingual Comparison of Human and Model Relative Word Importance
Felix Morger | Stephanie Brandl | Lisa Beinborn | Nora Hollenstein

Relative word importance is a key metric for natural language processing. In this work, we compare human and model relative word importance to investigate whether pretrained neural language models focus on the same words as humans cross-lingually. We perform an extensive study using several importance metrics (gradient-based saliency and attention-based measures) in monolingual and multilingual models, drawing on eye-tracking corpora from four languages (German, Dutch, English, and Russian). We find that gradient-based saliency, first-layer attention, and attention flow correlate strongly with human eye-tracking data across all four languages. We further analyze the role of word length and word frequency in determining relative importance and find that importance correlates strongly with both, although the mechanisms behind these non-linear relations remain elusive. We obtain a cross-lingual approximation of the similarity between human and computational language processing and insights into the usability of several importance metrics.
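
As a hedged illustration of one importance metric named above, the sketch below computes token-level gradient-based saliency from a pretrained multilingual model and correlates it with a human importance vector via Spearman's rho. The model name and the `fixation_durations` placeholder are assumptions for the example, not the paper's setup.

```python
import torch
from scipy.stats import spearmanr
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

sentence = "The quick brown fox jumps over the lazy dog"
enc = tokenizer(sentence, return_tensors="pt")

# Run the model on input embeddings so gradients can flow back to each token.
embeddings = model.get_input_embeddings()(enc["input_ids"]).detach().requires_grad_(True)
logits = model(inputs_embeds=embeddings, attention_mask=enc["attention_mask"]).logits

# Saliency: L2 norm of the gradient of the summed logits w.r.t. each input embedding.
logits.sum().backward()
saliency = embeddings.grad.norm(dim=-1).squeeze(0)

# Placeholder for human relative importance (e.g. fixation durations per token).
fixation_durations = torch.rand(saliency.shape[0])
rho, _ = spearmanr(saliency.numpy(), fixation_durations.numpy())
print(f"Spearman correlation with human importance: {rho:.2f}")
```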

pdf bib
Dispatcher: A Message-Passing Approach to Language Modelling
Alberto Cetoli

This paper proposes a message-passing mechanism for language modelling. A new layer type is introduced that aims to replace self-attention in unidirectional sequence generation tasks. The system is shown to be competitive with existing methods: given N tokens, the computational complexity is O(N log N) and the memory complexity is O(N) under reasonable assumptions. The Dispatcher layer is shown to achieve perplexity comparable to self-attention while being more efficient.
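
The sketch below is not the Dispatcher implementation itself, only a hedged illustration of one causal message-passing pattern with the complexity the abstract reports: each position receives a message from 2^k steps back for k = 0, 1, ..., giving O(N log N) compute while keeping generation unidirectional. The mixing function and layer shape are assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class LogStepMessagePassing(nn.Module):
    """Causal layer: position i receives messages from i - 2^k for each k."""
    def __init__(self, d_model):
        super().__init__()
        self.mix = nn.Linear(2 * d_model, d_model)

    def forward(self, x):                      # x: (batch, N, d_model)
        n = x.size(1)
        h = x
        for k in range(max(1, math.ceil(math.log2(n)))):
            shift = 2 ** k
            if shift >= n:
                break
            # Message from `shift` positions earlier; zero-padding at the start
            # keeps the layer strictly causal (unidirectional).
            msg = F.pad(h[:, :-shift], (0, 0, shift, 0))
            h = h + torch.tanh(self.mix(torch.cat([h, msg], dim=-1)))
        return h

# Usage: layer = LogStepMessagePassing(256); y = layer(torch.randn(2, 128, 256))
```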

pdf bib
In Search of Meaning and Its Representations for Computational Linguistics
Simon Dobnik | Robin Cooper | Adam Ek | Bill Noble | Staffan Larsson | Nikolai Ilinykh | Vladislav Maraev | Vidya Somashekarappa

In this paper we examine different meaning representations that are commonly used in natural language applications today and discuss their limits, both in terms of the aspects of natural language meaning they model and in terms of the aspects of the applications for which they are used.

pdf bib
Can We Use Small Models to Investigate Multimodal Fusion Methods?
Lovisa Hagström | Tobias Norlund | Richard Johansson

Many successful methods for fusing language with information from the visual modality have recently been proposed, and the topic of multimodal training is ever evolving. However, it is still largely unknown what makes different vision-and-language models successful. Investigations into this are made difficult by the large sizes of the models used, which require large training datasets and incur long training and compute times. We therefore propose studying multimodal fusion methods in a smaller setting with small models and datasets. In this setting, we can experiment with different approaches for fusing multimodal information with language in a controlled fashion, while allowing for fast experimentation. We illustrate this idea with the math arithmetics sandbox, a setting in which we fuse language with information from the math modality and strive to replicate some fusion methods from the vision-and-language domain. We find that some results for fusion methods from the larger domain translate to the math arithmetics sandbox, indicating a promising avenue for multimodal model prototyping.
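
To make the idea of a small-scale fusion sandbox concrete, here is a hedged toy sketch (not the authors' sandbox): simple arithmetic operands are treated as a second modality and fused with a tiny bag-of-words text encoder by concatenation, one of the simpler fusion strategies borrowed from vision-and-language models. The dataset generator, hashed vocabulary, and model sizes are all assumptions.

```python
import random
import torch
import torch.nn as nn

def make_example():
    a, b = random.randint(0, 9), random.randint(0, 9)
    text = f"what is {a} plus {b}"                       # language input
    modality = torch.tensor([a, b], dtype=torch.float)   # "math modality" input
    return text, modality, a + b                         # label: the sum (0..18)

class TinyFusionModel(nn.Module):
    def __init__(self, vocab_size=64, d=32, n_classes=19):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, d)  # tiny bag-of-words text encoder
        self.math_proj = nn.Linear(2, d)             # encode the math modality
        self.head = nn.Linear(2 * d, n_classes)      # fuse by concatenation

    def forward(self, token_ids, modality):
        fused = torch.cat([self.embed(token_ids), self.math_proj(modality)], dim=-1)
        return self.head(fused)

# Usage with a toy hashed vocabulary (illustrative only):
text, modality, label = make_example()
token_ids = torch.tensor([[hash(w) % 64 for w in text.split()]])
logits = TinyFusionModel()(token_ids, modality.unsqueeze(0))
```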

pdf bib
Embodied Interaction in Mental Health Consultations: Some Observations on Grounding and Repair
Jing Hui Law | Patrick Healey | Rosella Galindo Esparza

Shared physical space is an important resource for face-to-face interaction. People use the position and orientation of their bodies—relative to each other and relative to the physical environment—to determine who is part of a conversation, to manage conversational roles (e.g. speaker, addressee, side-participant) and to help co-ordinate turn-taking. These embodied uses of shared space also extend to more fine-grained aspects of interaction, such as gestures and body movements, to support topic management, orchestration of turns and grounding. This paper explores the role of embodied resources in (mis)communication in a corpus of mental health consultations. We illustrate some of the specific ways in which clinicians and patients can exploit embodiment and the position of objects in shared space to diagnose and manage moments of misunderstanding.

pdf bib
Norm Participation Grounds Language
David Schlangen

The striking recent advances in eliciting seemingly meaningful language behaviour from language-only machine learning models have only made more apparent, through the surfacing of clear limitations, the need to go beyond the language-only mode and to ground these models “in the world”. Proposals for doing so vary in the details, but what unites them is that the solution is sought in the addition of non-linguistic data types such as images or video streams, while largely keeping the mode of learning constant. I propose a different, and more wide-ranging, conception of how grounding should be understood: what grounds language is its normative nature. There are standards for doing things right; these standards are public and authoritative, while at the same time acceptance of authority can and must be disputed and negotiated, in interactions in which only bearers of normative status can rightfully participate. What grounds language, then, is the determined use that language users make of it, and what it is grounded in is the community of language users. I sketch this idea, and draw some conclusions for work on computational modelling of meaningful language use.

pdf bib
Where Am I and Where Should I Go? Grounding Positional and Directional Labels in a Disoriented Human Balancing Task
Sheikh Mannan | Nikhil Krishnaswamy

In this paper, we present an approach to grounding linguistic positional and directional labels directly in human motions during a disoriented balancing task in a multi-axis rotational device. We use deep neural models to predict human subjects’ joystick motions as well as the subjects’ proficiency in the task, and combine these with BERT embedding vectors for positional and directional labels extracted from annotations in an embodied direction classifier. We find that combining contextualized BERT embeddings with embeddings describing human motion and proficiency can successfully predict the direction a hypothetical human participant should move to achieve better balance, with accuracy comparable to that of a moderately proficient balancing-task subject, and that our combined embodied model may make decisions that are objectively better than those made by some humans.
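
A hedged sketch of the general recipe described above (not the authors' model): a BERT embedding of a directional label is concatenated with vectors summarizing joystick motion and task proficiency and fed to a small direction classifier. The model name, feature sizes, and number of direction classes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

def label_embedding(label):
    # Mean-pooled contextualized embedding of a positional/directional label.
    enc = tokenizer(label, return_tensors="pt")
    with torch.no_grad():
        return bert(**enc).last_hidden_state.mean(dim=1)   # (1, 768)

class DirectionClassifier(nn.Module):
    def __init__(self, motion_dim=16, proficiency_dim=4, n_directions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(768 + motion_dim + proficiency_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_directions),
        )

    def forward(self, label_emb, motion_feats, proficiency_feats):
        # Fuse label semantics with embodied motion and proficiency features.
        return self.net(torch.cat([label_emb, motion_feats, proficiency_feats], dim=-1))

# Usage: DirectionClassifier()(label_embedding("left"), torch.randn(1, 16), torch.randn(1, 4))
```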

pdf bib
From Speed to Car and Back: An Exploratory Study about Associations between Abstract Nouns and Images
Ludovica Cerini | Eliana Di Palma | Alessandro Lenci

Abstract concepts, notwithstanding their lack of physical referents in the real world, are grounded in sensorimotor experience. In fact, images depicting concrete entities may be associated with abstract concepts, via both direct and indirect grounding processes. However, it is still unclear what links connect the concrete concepts represented by images and abstract ones. To investigate these links, we conducted a preliminary study collecting word-association data and image-abstract-word pair ratings, to identify whether the associations between the visual and verbal systems rely on the same conceptual mappings. The goal of this research is to understand to what extent linguistic associations can be confirmed with visual stimuli, in order to have a starting point for multimodal analysis of abstract and concrete concepts.