Lawrence Hunter

Also published as: Lawrence E. Hunter


2024

It Is Not About What You Say, It Is About How You Say It: A Surprisingly Simple Approach for Improving Reading Comprehension
Sagi Shaier | Lawrence Hunter | Katharina von der Wense
Findings of the Association for Computational Linguistics: ACL 2024

Natural language processing has seen rapid progress over the past decade. Due to the speed of developments, some practices get established without proper evaluation. Considering one such case and focusing on reading comprehension, we ask our first research question: 1) How does the order of inputs – i.e., question and context – affect model performance? Additionally, given recent advancements in input emphasis, we ask a second research question: 2) Does emphasizing either the question, the context, or both enhance performance? Experimenting with 9 large language models across 3 datasets, we find that presenting the context before the question improves model performance, with an accuracy increase of up to 31%. Furthermore, emphasizing the context yields superior results compared to question emphasis, and in general, emphasizing parts of the input is particularly effective for addressing questions that models lack the parametric knowledge to answer. Experimenting with both prompt-based and attention-based emphasis methods, we additionally find that the best method is surprisingly simple: it only requires concatenating a few tokens to the input and results in an accuracy improvement of up to 36%, allowing smaller models to outperform their significantly larger counterparts.

Comparing Template-based and Template-free Language Model Probing
Sagi Shaier | Kevin Bennett | Lawrence Hunter | Katharina von der Wense
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

The differences between cloze-task language model (LM) probing with 1) expert-made templates and 2) naturally-occurring text have often been overlooked. Here, we evaluate 16 different LMs on 10 probing English datasets – 4 template-based and 6 template-free – in general and biomedical domains to answer the following research questions: (RQ1) Do model rankings differ between the two approaches? (RQ2) Do models’ absolute scores differ between the two approaches? (RQ3) Do the answers to RQ1 and RQ2 differ between general and domain-specific models? Our findings are: 1) Template-free and template-based approaches often rank models differently, except for the top domain-specific models. 2) Scores decrease by up to 42% Acc@1 when comparing parallel template-free and template-based prompts. 3) Perplexity is negatively correlated with accuracy in the template-free approach, but, counter-intuitively, they are positively correlated for template-based probing. 4) Models tend to predict the same answers frequently across prompts for template-based probing, which is less common when employing template-free techniques.

Desiderata For The Context Use Of Question Answering Systems
Sagi Shaier | Lawrence Hunter | Katharina von der Wense
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

Prior work has uncovered a set of common problems in state-of-the-art context-based question answering (QA) systems: a lack of attention to the context when the latter conflicts with a model’s parametric knowledge, little robustness to noise, and a lack of consistency with their answers. However, most prior work focuses on one or two of those problems in isolation, which makes it difficult to see trends across them. We aim to close this gap by first outlining a set of – previously discussed as well as novel – desiderata for QA models. We then survey relevant analysis and methods papers to provide an overview of the state of the field. The second part of our work presents experiments where we evaluate 15 QA systems on 5 datasets according to all desiderata at once. We find many novel trends, including (1) systems that are less susceptible to noise are not necessarily more consistent with their answers when given irrelevant context; (2) most systems that are more susceptible to noise are more likely to correctly answer according to a context that conflicts with their parametric knowledge; and (3) the combination of conflicting knowledge and noise can reduce system performance by up to 96%. As such, our desiderata help increase our understanding of how these models work and reveal potential avenues for improvements.

2023

Emerging Challenges in Personalized Medicine: Assessing Demographic Effects on Biomedical Question Answering Systems
Sagi Shaier | Kevin Bennett | Lawrence Hunter | Katharina Kann
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

Who Are All The Stochastic Parrots Imitating? They Should Tell Us!
Sagi Shaier | Lawrence Hunter | Katharina Kann
Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (Volume 2: Short Papers)

2019

CRAFT Shared Tasks 2019 Overview — Integrated Structure, Semantics, and Coreference
William Baumgartner | Michael Bada | Sampo Pyysalo | Manuel R. Ciosici | Negacy Hailu | Harrison Pielke-Lombardo | Michael Regan | Lawrence Hunter
Proceedings of the 5th Workshop on BioNLP Open Shared Tasks

As part of the BioNLP Open Shared Tasks 2019, the CRAFT Shared Tasks 2019 provides a platform to gauge the state of the art for three fundamental language processing tasks — dependency parse construction, coreference resolution, and ontology concept identification — over full-text biomedical articles. The structural annotation task requires the automatic generation of dependency parses for each sentence of an article given only the article text. The coreference resolution task focuses on linking coreferring base noun phrase mentions into chains using the symmetrical and transitive identity relation. The ontology concept annotation task involves the identification of concept mentions within text using the classes of ten distinct ontologies in the biomedical domain, both unmodified and augmented with extension classes. This paper provides an overview of each task, including descriptions of the data provided to participants and the evaluation metrics used, and discusses participant results relative to baseline performances for each of the three tasks.

2018

Three Dimensions of Reproducibility in Natural Language Processing
K. Bretonnel Cohen | Jingbo Xia | Pierre Zweigenbaum | Tiffany Callahan | Orin Hargraves | Foster Goss | Nancy Ide | Aurélie Névéol | Cyril Grouin | Lawrence E. Hunter
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2013

UColorado_SOM: Extraction of Drug-Drug Interactions from Biomedical Text using Knowledge-rich and Knowledge-poor Features
Negacy Hailu | Lawrence E. Hunter | K. Bretonnel Cohen
Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013)

2011

Fast and simple semantic class assignment for biomedical text
K. Bretonnel Cohen | Thomas Christiansen | William Baumgartner Jr. | Karin Verspoor | Lawrence Hunter
Proceedings of BioNLP 2011 Workshop

A scaleable automated quality assurance technique for semantic representations and proposition banks
K. Bretonnel Cohen | Lawrence Hunter | Martha Palmer
Proceedings of the 5th Linguistic Annotation Workshop

2010

An Overview of the CRAFT Concept Annotation Guidelines
Michael Bada | Miriam Eckert | Martha Palmer | Lawrence Hunter
Proceedings of the Fourth Linguistic Annotation Workshop

Test Suite Design for Biomedical Ontology Concept Recognition Systems
K. Bretonnel Cohen | Christophe Roeder | William A. Baumgartner Jr. | Lawrence E. Hunter | Karin Verspoor
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)

Systems that locate mentions of concepts from ontologies in free text are known as ontology concept recognition systems. This paper describes an approach to the evaluation of the workings of ontology concept recognition systems through use of a structured test suite and presents a publicly available test suite for this purpose. It is built using the principles of descriptive linguistic fieldwork and of software testing. More broadly, we also seek to investigate what general principles might inform the construction of such test suites. The test suite was found to be effective in identifying performance errors in an ontology concept recognition system: the system failed to recognize 2.1% of canonical forms and recognized no non-canonical forms at all. Regarding the question of general principles of test suite construction, we compared this test suite to a named entity recognition test suite constructor. The two models had twenty feature types in total, seven of which were shared between them, suggesting that there is a core of feature types that may be applicable to test suite construction for any similar type of application.

2009

High-precision biological event extraction with a concept recognizer
K. Bretonnel Cohen | Karin Verspoor | Helen Johnson | Chris Roeder | Philip Ogren | William Baumgartner | Elizabeth White | Lawrence Hunter
Proceedings of the BioNLP 2009 Workshop Companion Volume for Shared Task

2008

Software Testing and the Naturally Occurring Data Assumption in Natural Language Processing
K. Bretonnel Cohen | William A. Baumgartner Jr. | Lawrence Hunter
Software Engineering, Testing, and Quality Assurance for Natural Language Processing

2006

Refactoring Corpora
Helen L. Johnson | William A. Baumgartner Jr. | Martin Krallinger | K. Bretonnel Cohen | Lawrence Hunter
Proceedings of the HLT-NAACL BioNLP Workshop on Linking Natural Language and Biology

2005

Corpus Design for Biomedical Natural Language Processing
K. Bretonnel Cohen | Lynne Fox | Philip V. Ogren | Lawrence Hunter
Proceedings of the ACL-ISMB Workshop on Linking Biological Literature, Ontologies and Databases: Mining Biological Semantics

2004

A Resource for Constructing Customized Test Suites for Molecular Biology Entity Identification Systems
K. Bretonnel Cohen | Lorraine Tanabe | Shuhei Kinoshita | Lawrence Hunter
HLT-NAACL 2004 Workshop: Linking Biological Literature, Ontologies and Databases

2002

Contrast and variability in gene names
K. Bretonnel Cohen | Andrew Dolbey | George Acquaah-Mensah | Lawrence Hunter
Proceedings of the ACL-02 Workshop on Natural Language Processing in the Biomedical Domain

2000

Extracting Molecular Binding Relationships from Biomedical Text
Thomas C. Rindflesch | Jayant V. Rajan | Lawrence Hunter
Sixth Applied Natural Language Processing Conference