Bill Noble


2023

Describe Me an Auklet: Generating Grounded Perceptual Category Descriptions
Bill Noble | Nikolai Ilinykh
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Human speakers can generate descriptions of perceptual concepts, abstracted from the instance level. Moreover, such descriptions can be used by other speakers to learn provisional representations of those concepts. Learning and using abstract perceptual concepts is under-investigated in the language-and-vision field, and the problem is highly relevant to representation learning in multi-modal NLP. In this paper, we introduce a framework for testing category-level perceptual grounding in multi-modal language models. In particular, we train separate neural networks to **generate** and **interpret** descriptions of visual categories. We measure the *communicative success* of the two models by the zero-shot classification performance of the interpretation model, which we argue is an indicator of perceptual grounding. Using this framework, we compare the performance of *prototype*- and *exemplar*-based representations. Finally, we show that communicative success exposes performance issues in the generation model that are not captured by traditional intrinsic NLG evaluation metrics, and we argue that these issues stem from a failure to properly ground language in vision at the category level.
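
As a rough illustration of the evaluation loop this abstract describes, the sketch below computes communicative success as the interpreter's zero-shot classification accuracy on held-out instances. The `generator` and `interpreter` interfaces are hypothetical stand-ins, not the paper's released API:

```python
def communicative_success(generator, interpreter, categories, test_instances):
    """Zero-shot accuracy of the interpretation model, used as a proxy
    for how well the generated descriptions ground the categories.

    Assumed (illustrative) interfaces:
      generator(category) -> str                      # category description
      interpreter(description, image) -> float        # compatibility score
    """
    # One generated description per perceptual category.
    descriptions = {c: generator(c) for c in categories}

    correct = 0
    for image, true_category in test_instances:
        # The interpreter scores the image against every description and
        # predicts the best-matching category, without instance-level training.
        scores = {c: interpreter(d, image) for c, d in descriptions.items()}
        predicted = max(scores, key=scores.get)
        correct += predicted == true_category
    return correct / len(test_instances)
```

Under this framing, a generator that produces fluent but poorly grounded descriptions scores low even if intrinsic NLG metrics look acceptable.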

2022

In Search of Meaning and Its Representations for Computational Linguistics
Simon Dobnik | Robin Cooper | Adam Ek | Bill Noble | Staffan Larsson | Nikolai Ilinykh | Vladislav Maraev | Vidya Somashekarappa
Proceedings of the 2022 CLASP Conference on (Dis)embodiment

In this paper, we examine different meaning representations that are commonly used in natural language applications today and discuss their limits, both in terms of which aspects of natural language meaning they model and in terms of which aspects of the applications they serve.

Conditional Language Models for Community-Level Linguistic Variation
Bill Noble | Jean-philippe Bernardy
Proceedings of the Fifth Workshop on Natural Language Processing and Computational Social Science (NLP+CSS)

Community-level linguistic variation is a core concept in sociolinguistics. In this paper, we use conditioned neural language models to learn vector representations for 510 online communities. We use these representations to measure linguistic variation between communities and investigate the degree to which linguistic variation corresponds with social connections between communities. We find that our sociolinguistic embeddings are highly correlated with a social network-based representation that does not use any linguistic input.
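
A minimal sketch of a community-conditioned language model in the spirit of this abstract. The conditioning-by-addition scheme, layer sizes, and names are illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class CommunityConditionedLM(nn.Module):
    """Language model conditioned on a learned per-community vector."""

    def __init__(self, vocab_size, n_communities, dim=256):
        super().__init__()
        self.tokens = nn.Embedding(vocab_size, dim)
        self.communities = nn.Embedding(n_communities, dim)
        self.rnn = nn.LSTM(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, token_ids, community_id):
        # Add the community vector to every token embedding, so the model
        # must rely on it to explain community-specific language use.
        x = self.tokens(token_ids) + self.communities(community_id).unsqueeze(1)
        h, _ = self.rnn(x)
        return self.out(h)
```

After training, linguistic variation between two communities can then be read off the learned embedding table, e.g. as `1 - torch.cosine_similarity(emb[a], emb[b], dim=0)`, and compared against a purely social-network-based representation.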

Classification Systems: Combining taxonomical and perceptual lexical meaning
Bill Noble | Staffan Larsson | Robin Cooper
Proceedings of the 3rd Natural Logic Meets Machine Learning Workshop (NALOMA III)

2021

Semantic shift in social networks
Bill Noble | Asad Sayeed | Raquel Fernández | Staffan Larsson
Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics

Just as the meaning of words is tied to the communities in which they are used, so too is semantic change. But how does lexical semantic change manifest differently across different communities? In this work, we investigate the relationship between community structure and semantic change in 45 communities from the social media website Reddit. We use distributional methods to quantify lexical semantic change and induce a social network on communities, based on interactions between members. We explore the relationship between semantic change and the clustering coefficient of a community’s social network graph, as well as community size and stability. While none of these factors are found to be significant on their own, we report a significant effect of their three-way interaction. We also report on significant word-level effects of frequency and change in frequency, which replicate previous findings.
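
The two quantities at the heart of this study can be sketched as follows, assuming word embeddings have already been trained (and aligned) for each time period; the helper names are illustrative, and the paper's full pipeline includes more than this:

```python
import networkx as nx
import numpy as np

def semantic_change(vec_t1, vec_t2):
    """Cosine distance between a word's embeddings from two time periods:
    a common distributional measure of lexical semantic change."""
    cos = np.dot(vec_t1, vec_t2) / (np.linalg.norm(vec_t1) * np.linalg.norm(vec_t2))
    return 1.0 - cos

def community_clustering(interactions):
    """Average clustering coefficient of a community's social network,
    where `interactions` is a list of (user_a, user_b) pairs induced
    from interactions between members."""
    g = nx.Graph()
    g.add_edges_from(interactions)
    return nx.average_clustering(g)
```

Values like these, together with community size and stability, are the predictors whose three-way interaction the paper reports as significant.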

Large-scale text pre-training helps with dialogue act recognition, but not without fine-tuning
Bill Noble | Vladislav Maraev
Proceedings of the 14th International Conference on Computational Semantics (IWCS)

We use dialogue act recognition (DAR) to investigate how well BERT represents utterances in dialogue, and how fine-tuning and large-scale pre-training contribute to its performance. We find that while both the standard BERT pre-training and pre-training on dialogue-like data are useful, task-specific fine-tuning is essential for good performance.
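
DAR can be framed as utterance-level sequence classification; a minimal fine-tuning step with Hugging Face Transformers might look like the sketch below. The label count, learning rate, and checkpoint are illustrative assumptions, not the paper's exact setup:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_ACTS = 42  # illustrative; set to the tagset size of the DA corpus

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=NUM_ACTS
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def fine_tune_step(utterances, labels):
    """One gradient step of the task-specific fine-tuning that the
    paper finds essential, on top of any large-scale pre-training."""
    batch = tokenizer(utterances, padding=True, truncation=True,
                      return_tensors="pt")
    loss = model(**batch, labels=torch.tensor(labels)).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```

The same classification head could sit on top of a model pre-trained on dialogue-like data instead; the finding is that without this fine-tuning stage, neither form of pre-training suffices.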

2020

Personae under uncertainty: The case of topoi
Bill Noble | Ellen Breitholtz | Robin Cooper
Proceedings of the Probability and Meaning Conference (PaM 2020)

In this paper, we propose a probabilistic model of social signalling which adopts a persona-based account of social meaning. We use this model to develop a socio-semantic theory of conventionalised reasoning patterns, known as topoi. On this account, the social meaning of a topos, as conveyed in an argument, is based on the set of ideologically related topoi it indicates in context. We draw a connection between the role of personae in social meaning and the category adjustment effect, a well-known psychological phenomenon in which the representation of a stimulus is biased in the direction of the category in which it falls. Finally, we situate the interpretation of social signals as an update to the information state of an agent in a formal TTR model of dialogue.
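
The probabilistic core of such an update can be sketched as a Bayesian revision of beliefs over personae given an observed signal (e.g., the use of a particular topos). The table-based representation below is an illustrative stand-in for the paper's TTR-based information-state update:

```python
def update_persona_beliefs(prior, likelihood, signal):
    """Bayesian update over personae given an observed social signal.

    Assumed (illustrative) structures:
      prior[p]              -- P(persona = p)
      likelihood[p][signal] -- P(signal | persona = p)
    """
    posterior = {p: prior[p] * likelihood[p].get(signal, 0.0) for p in prior}
    z = sum(posterior.values())
    # If the signal is unexpected under every persona, keep the prior.
    return {p: v / z for p, v in posterior.items()} if z else prior
```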

2015

Centre Stage: How Social Network Position Shapes Linguistic Coordination
Bill Noble | Raquel Fernández
Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics