Guillaume Genthial
2019
Effective Feature Representation for Clinical Text Concept Extraction
Yifeng Tao | Bruno Godefroy | Guillaume Genthial | Christopher Potts
Proceedings of the 2nd Clinical Natural Language Processing Workshop
Crucial information about the practice of healthcare is recorded only in free-form text, which creates an enormous opportunity for high-impact NLP. However, annotated healthcare datasets tend to be small and expensive to obtain, which raises the question of how to make maximally efficient use of the available data. To this end, we develop an LSTM-CRF model for combining unsupervised word representations and hand-built feature representations derived from publicly available healthcare ontologies. We show that this combined model yields superior performance on five datasets of diverse kinds of healthcare text (clinical, social, scientific, commercial). Each involves the labeling of complex, multi-word spans that pick out different healthcare concepts. We also introduce a new labeled dataset for identifying the treatment relations between drugs and diseases.
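A minimal sketch (not the authors' released code) of the core idea in this abstract: concatenate pretrained word embeddings with hand-built ontology-derived feature vectors before a BiLSTM tagger. The feature dimension, tag set, and the simple linear emission layer standing in for the full CRF decoder are assumptions for illustration only.

# Sketch: BiLSTM over [word embedding ; ontology feature vector] per token.
import torch
import torch.nn as nn

class ConceptTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim, feat_dim, hidden_dim, num_tags):
        super().__init__()
        # Unsupervised word representations (e.g. pretrained embeddings).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # BiLSTM over the concatenation of word and ontology features.
        self.lstm = nn.LSTM(embed_dim + feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Per-token emission scores; a CRF layer would decode over these.
        self.emissions = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids, feats):
        # token_ids: (batch, seq_len); feats: (batch, seq_len, feat_dim)
        x = torch.cat([self.embed(token_ids), feats], dim=-1)
        h, _ = self.lstm(x)
        return self.emissions(h)

# Toy usage: 2 sentences of length 5 with 10-dimensional feature vectors.
model = ConceptTagger(vocab_size=100, embed_dim=32, feat_dim=10,
                      hidden_dim=64, num_tags=5)
scores = model(torch.randint(0, 100, (2, 5)), torch.rand(2, 5, 10))
print(scores.shape)  # torch.Size([2, 5, 5])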
2018
Noising and Denoising Natural Language: Diverse Backtranslation for Grammar Correction
Ziang Xie | Guillaume Genthial | Stanley Xie | Andrew Ng | Dan Jurafsky
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
Translation-based methods for grammar correction that directly map noisy, ungrammatical text to its clean counterpart are able to correct a broad range of errors; however, such techniques are bottlenecked by the need for a large parallel corpus of noisy and clean sentence pairs. In this paper, we consider synthesizing parallel data by noising a clean monolingual corpus. While most previous approaches introduce perturbations using features computed from local context windows, we instead develop error generation processes using a neural sequence transduction model trained to translate clean examples to their noisy counterparts. Given a corpus of clean examples, we propose beam search noising procedures to synthesize additional noisy examples that human evaluators were nearly unable to discriminate from nonsynthesized examples. Surprisingly, when trained on additional data synthesized using our best-performing noising scheme, our model approaches the same performance as when trained on additional nonsynthesized data.
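A minimal sketch of the beam search noising idea: when decoding a clean sentence through a clean-to-noisy transduction model, perturb hypothesis scores with random noise so the search returns diverse, plausibly noisy outputs rather than only the single best one. The next_token_scores interface and the noise scale beta are hypothetical placeholders, not the paper's exact formulation.

# Sketch: beam search with randomly perturbed hypothesis scores.
import math
import random

def noised_beam_search(next_token_scores, start, end, beam_size=4,
                       max_len=20, beta=1.0):
    """next_token_scores(prefix) -> dict mapping token -> log-probability."""
    beams = [([start], 0.0)]            # (prefix, cumulative score)
    finished = []
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            for tok, logp in next_token_scores(prefix).items():
                # Add uniform noise scaled by beta to encourage diversity.
                noise = beta * random.random()
                candidates.append((prefix + [tok], score + logp + noise))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in candidates[:beam_size]:
            (finished if prefix[-1] == end else beams).append((prefix, score))
        if not beams:
            break
    return finished or beams

# Toy usage with a uniform next-token distribution over a tiny vocabulary.
def toy_scores(prefix):
    return {t: math.log(1.0 / 3) for t in ("a", "b", "</s>")}

for hyp, score in noised_beam_search(toy_scores, "<s>", "</s>"):
    print(hyp, round(score, 2))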