2024
Prompting open-source and commercial language models for grammatical error correction of English learner text
Christopher Davis | Andrew Caines | Øistein E. Andersen | Shiva Taslimipoor | Helen Yannakoudakis | Zheng Yuan | Christopher Bryant | Marek Rei | Paula Buttery
Findings of the Association for Computational Linguistics: ACL 2024
Thanks to recent advances in generative AI, we are able to prompt large language models (LLMs) to produce texts which are fluent and grammatical. In addition, it has been shown that we can elicit attempts at grammatical error correction (GEC) from LLMs when prompted with ungrammatical input sentences. We evaluate how well LLMs can perform at GEC by measuring their performance on established benchmark datasets. We go beyond previous studies, which only examined GPT* models on a selection of English GEC datasets, by evaluating seven open-source and three commercial LLMs on four established GEC benchmarks. We investigate model performance and report results against individual error types. Our results indicate that LLMs do not generally outperform supervised English GEC models, except in specific contexts, namely commercial LLMs on benchmarks annotated with fluency corrections as opposed to minimal edits. We find that several open-source models outperform commercial ones on minimal edit benchmarks, and that in some settings zero-shot prompting is just as competitive as few-shot prompting.
2023
CLIMB – Curriculum Learning for Infant-inspired Model Building
Richard Diehl Martinez | Hope McGovern | Zebulon Goriely | Christopher Davis | Andrew Caines | Paula Buttery | Lisa Beinborn
Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning
An empirical, corpus-based, approach to Cantonese nominal expressions
Grégoire Winterstein | David Vergnaud | Hannah Hoi Tung Yu | Jérémie Lupien | Laperle Samuel | Pei Sui Luk | Christopher Davis
Proceedings of the 37th Pacific Asia Conference on Language, Information and Computation
2022
Probing for targeted syntactic knowledge through grammatical error detection
Christopher Davis | Christopher Bryant | Andrew Caines | Marek Rei | Paula Buttery
Proceedings of the 26th Conference on Computational Natural Language Learning (CoNLL)
Targeted studies testing knowledge of subject-verb agreement (SVA) indicate that pre-trained language models encode syntactic information. We assert that if models robustly encode subject-verb agreement, they should be able to identify when agreement is correct and when it is incorrect. To that end, we propose grammatical error detection as a diagnostic probe to evaluate token-level contextual representations for their knowledge of SVA. We evaluate contextual representations at each layer from five pre-trained English language models: BERT, XLNet, GPT-2, RoBERTa and ELECTRA. We leverage public annotated training data from both English second language learners and Wikipedia edits, and report results on manually crafted stimuli for subject-verb agreement. We find that masked language models linearly encode information relevant to the detection of SVA errors, while the autoregressive models perform on par with our baseline. However, we also observe a divergence in performance when probes are trained on different training sets, and when they are evaluated on different syntactic constructions, suggesting the information pertaining to SVA error detection is not robustly encoded.
2021
Multi-Class Grammatical Error Detection for Correction: A Tale of Two Systems
Zheng Yuan | Shiva Taslimipoor | Christopher Davis | Christopher Bryant
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
In this paper, we show how a multi-class grammatical error detection (GED) system can be used to improve grammatical error correction (GEC) for English. Specifically, we first develop a new state-of-the-art binary detection system based on pre-trained ELECTRA, and then extend it to multi-class detection using different error type tagsets derived from the ERRANT framework. Output from this detection system is used as auxiliary input to fine-tune a novel encoder-decoder GEC model, and we subsequently re-rank the N-best GEC output to find the hypothesis that most agrees with the GED output. Results show that fine-tuning the GEC system using 4-class GED produces the best model, but re-ranking using 55-class GED leads to the best performance overall. This suggests that different multi-class GED systems benefit GEC in different ways. Ultimately, our system outperforms all other previous work that combines GED and GEC, and achieves a new single-model NMT-based state of the art on the BEA-test benchmark.
2019
Deconstructing multimodality: visual properties and visual context in human semantic processing
Christopher Davis | Luana Bulat | Anita Lilla Vero | Ekaterina Shutova
Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019)
Multimodal semantic models that extend linguistic representations with additional perceptual input have proved successful in a range of natural language processing (NLP) tasks. Recent research has successfully used neural methods to automatically create visual representations for words. However, these works have extracted visual features from complete images, and have not examined how different kinds of visual information impact performance. In contrast, we construct multimodal models that differentiate between internal visual properties of the objects and their external visual context. We evaluate the models on the task of decoding brain activity associated with the meanings of nouns, demonstrating their advantage over those based on complete images.
2010
Decision Theory and Discourse Particles: A Case Study from a Large Japanese Sentiment Corpus
Christopher Davis
Proceedings of the 24th Pacific Asia Conference on Language, Information and Computation