2024
Modeling the Sacred: Considerations when Using Religious Texts in Natural Language Processing
Ben Hutchinson
Findings of the Association for Computational Linguistics: NAACL 2024
This position paper concerns the use of religious texts in Natural Language Processing (NLP), which is of special interest to the Ethics of NLP. Religious texts are expressions of culturally important values, and machine learned models have a propensity to reproduce cultural values encoded in their training data. Furthermore, translations of religious texts are frequently used by NLP researchers when language data is scarce. This repurposes the translations from their original uses and motivations, which often involve attracting new followers. This paper argues that NLP’s use of such texts raises considerations that go beyond model biases, including data provenance, cultural contexts, and their use in proselytism. We argue for more consideration of researcher positionality, and of the perspectives of marginalized linguistic and religious communities.
“It’s how you do things that matters”: Attending to Process to Better Serve Indigenous Communities with Language Technologies
Ned Cooper | Courtney Heldreth | Ben Hutchinson
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)
Indigenous languages are historically under-served by Natural Language Processing (NLP) technologies, but this is changing for some languages with the recent scaling of large multilingual models and an increased focus by the NLP community on endangered languages. This position paper explores ethical considerations in building NLP technologies for Indigenous languages, based on the premise that such projects should primarily serve Indigenous communities. We report on interviews with 17 researchers working in or with Aboriginal and/or Torres Strait Islander communities on language technology projects in Australia. Drawing on insights from the interviews, we recommend practices for NLP researchers to increase attention to the process of engagements with Indigenous communities, rather than focusing only on decontextualised artefacts.
2022
Underspecification in Scene Description-to-Depiction Tasks
Ben Hutchinson | Jason Baldridge | Vinodkumar Prabhakaran
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Questions regarding implicitness, ambiguity and underspecification are crucial for understanding the task validity and ethical concerns of multimodal image+text systems, yet have received little attention to date. This position paper maps out a conceptual framework to address this gap, focusing on systems which generate images depicting scenes from scene descriptions. In doing so, we account for how texts and images convey meaning differently. We outline a set of core challenges concerning textual and visual ambiguity, as well as risks that may be amplified by ambiguous and underspecified elements. We propose and discuss strategies for addressing these challenges, including generating visually ambiguous images, and generating a set of diverse images.
2020
Social Biases in NLP Models as Barriers for Persons with Disabilities
Ben Hutchinson | Vinodkumar Prabhakaran | Emily Denton | Kellie Webster | Yu Zhong | Stephen Denuyl
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Building equitable and inclusive NLP technologies demands consideration of whether and how social attitudes are represented in ML models. In particular, representations encoded in models often inadvertently perpetuate undesirable social biases from the data on which they are trained. In this paper, we present evidence of such undesirable biases towards mentions of disability in two different English language models: toxicity prediction and sentiment analysis. Next, we demonstrate that the neural embeddings that are the critical first step in most NLP pipelines similarly contain undesirable biases towards mentions of disability. We end by highlighting topical biases in the discourse about disability which may contribute to the observed model biases; for instance, gun violence, homelessness, and drug addiction are over-represented in texts discussing mental illness.
2019
Perturbation Sensitivity Analysis to Detect Unintended Model Biases
Vinodkumar Prabhakaran | Ben Hutchinson | Margaret Mitchell
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Data-driven statistical Natural Language Processing (NLP) techniques leverage large amounts of language data to build models that can understand language. However, most language data reflect the public discourse at the time the data was produced, and hence NLP models are susceptible to learning incidental associations around named referents at a particular point in time, in addition to general linguistic meaning. An NLP system designed to model notions such as sentiment and toxicity should ideally produce scores that are independent of the identity of such entities mentioned in text and their social associations. For example, in a general purpose sentiment analysis system, a phrase such as “I hate Katy Perry” should be interpreted as having the same sentiment as “I hate Taylor Swift”. Based on this idea, we propose a generic evaluation framework, Perturbation Sensitivity Analysis, which detects unintended model biases related to named entities and requires no new annotations or corpora. We demonstrate the utility of this analysis by employing it on two different NLP models (a sentiment model and a toxicity model) applied to online comments in English from four different genres.
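The perturbation idea described in this abstract can be illustrated with a short sketch. The code below is only a rough illustration under assumed names (perturbation_sensitivity, score, and the toy scorer are placeholders, not the paper's implementation): it scores copies of a sentence that differ only in the named entity and summarises how much the model's output varies across names.

```python
# Rough sketch of perturbation sensitivity: score copies of a sentence that
# differ only in the named entity, then summarise how much the scores vary.
# `score` is a hypothetical stand-in for any sentiment or toxicity model.
from statistics import mean, pstdev


def perturbation_sensitivity(template, names, score):
    """Fill the template with each name, score each variant, and report
    the spread of scores across names (larger spread = more name-sensitive)."""
    scores = [score(template.format(name=name)) for name in names]
    return {
        "mean_score": mean(scores),
        "score_range": max(scores) - min(scores),
        "score_deviation": pstdev(scores),
    }


# Toy usage with a dummy scorer; a real evaluation would substitute names into
# naturally occurring comments and use a trained sentiment or toxicity model.
dummy_score = lambda text: 0.9 if "Katy Perry" in text else 0.2
print(perturbation_sensitivity("I hate {name}.",
                               ["Katy Perry", "Taylor Swift"],
                               dummy_score))
```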
2009
Using the Web for Language Independent Spellchecking and Autocorrection
Casey Whitelaw | Ben Hutchinson | Grace Y Chung | Ged Ellis
Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing
2007
TAT: An Author Profiling Tool with Application to Arabic Emails
Dominique Estival | Tanja Gaustad | Son Bao Pham | Will Radford | Ben Hutchinson
Proceedings of the Australasian Language Technology Workshop 2007
2005
Modelling the Substitutability of Discourse Connectives
Ben Hutchinson
Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)
2004
Acquiring the Meaning of Discourse Markers
Ben Hutchinson
Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL-04)
Mining the Web for Discourse Markers
Ben Hutchinson
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)
This paper proposes a methodology for obtaining sentences containing discourse markers from the World Wide Web. The proposed methodology is particularly suitable for collecting large numbers of discourse marker tokens. It relies on the automatic identification of discourse markers, and we show that this can be done with an accuracy within 9% of that of human performance. We also show that the distribution of discourse markers on the web correlates highly with that in a conventional balanced corpus.
2003
Intrinsic versus Extrinsic Evaluations of Parsing Systems
Diego Mollá | Ben Hutchinson
Proceedings of the EACL 2003 Workshop on Evaluation Initiatives in Natural Language Processing: are evaluation methods, metrics and resources reusable?
1999
A valency dictionary architecture for Machine Translation
Timothy Baldwin | Francis Bond | Ben Hutchinson
Proceedings of the 8th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages