Stefan Hahn
2020
Generating Medical Reports from Patient-Doctor Conversations Using Sequence-to-Sequence Models
Seppo Enarvi | Marilisa Amoia | Miguel Del-Agua Teba | Brian Delaney | Frank Diehl | Stefan Hahn | Kristina Harris | Liam McGrath | Yue Pan | Joel Pinto | Luca Rubini | Miguel Ruiz | Gagandeep Singh | Fabian Stemmer | Weiyi Sun | Paul Vozila | Thomas Lin | Ranjani Ramamurthy
Proceedings of the First Workshop on Natural Language Processing for Medical Conversations
We discuss automatic creation of medical reports from ASR-generated patient-doctor conversational transcripts using an end-to-end neural summarization approach. We explore both recurrent neural network (RNN) and Transformer-based sequence-to-sequence architectures for summarizing medical conversations. We incorporate enhancements to these architectures, such as the pointer-generator network, which facilitates copying parts of the conversations into the reports, and a hierarchical RNN encoder, which makes RNN training three times faster on long inputs. On a dataset of 800k orthopedic encounters, we compare the relative improvements of the different model architectures over an oracle extractive baseline. Consistent with observations in the literature for machine translation and related tasks, we find that the Transformer models outperform the RNN models in accuracy while taking less than half the time to train. Substantial gains over a strong oracle baseline indicate that sequence-to-sequence modeling is a promising approach for automatic generation of medical reports when data is available at scale.
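For context on the copy mechanism mentioned in the abstract, below is a minimal sketch of a single pointer-generator decoding step: the output distribution mixes a generation distribution over the fixed vocabulary with the attention (copy) distribution over source positions, so words from the conversation transcript, including out-of-vocabulary ones, can appear in the report. This is an illustrative PyTorch sketch, not the authors' implementation; the class name, layer sizes, and gating features are assumptions.

```python
# A minimal sketch (assumed names and sizes, not the authors' implementation)
# of a pointer-generator decoding step: the final word distribution mixes a
# generation distribution over the vocabulary with the attention (copy)
# distribution over source-token positions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PointerGeneratorStep(nn.Module):
    def __init__(self, hidden_size: int, vocab_size: int):
        super().__init__()
        self.vocab_size = vocab_size
        self.vocab_proj = nn.Linear(2 * hidden_size, vocab_size)  # generation head
        self.p_gen_proj = nn.Linear(2 * hidden_size, 1)           # copy/generate gate

    def forward(self, decoder_state, context, attention, src_ids, extended_vocab_size):
        # decoder_state: (B, H)  current decoder hidden state
        # context:       (B, H)  attention-weighted encoder context
        # attention:     (B, S)  attention weights over source positions (sum to 1)
        # src_ids:       (B, S)  source token ids in the extended (vocab + OOV) space
        features = torch.cat([decoder_state, context], dim=-1)
        gen_dist = F.softmax(self.vocab_proj(features), dim=-1)   # (B, V)
        p_gen = torch.sigmoid(self.p_gen_proj(features))          # (B, 1)

        # Pad the generation distribution so source-only OOV ids have slots.
        batch = gen_dist.size(0)
        extra = torch.zeros(batch, extended_vocab_size - self.vocab_size,
                            device=gen_dist.device)
        final = torch.cat([p_gen * gen_dist, extra], dim=-1)
        # Scatter the copy probability mass onto the source token ids.
        return final.scatter_add(1, src_ids, (1.0 - p_gen) * attention)


# Toy usage with random tensors: 3 source-only OOV words extend the vocabulary.
B, H, V, S, EXT = 2, 16, 100, 12, 103
step = PointerGeneratorStep(H, V)
dist = step(torch.randn(B, H), torch.randn(B, H),
            F.softmax(torch.randn(B, S), dim=-1),
            torch.randint(0, EXT, (B, S)), EXT)
assert torch.allclose(dist.sum(-1), torch.ones(B))  # still a valid distribution
```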
2008
A Comparison of Various Methods for Concept Tagging for Spoken Language Understanding
Stefan Hahn | Patrick Lehnen | Christian Raymond | Hermann Ney
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
The extraction of flat concepts from a given word sequence is usually one of the first steps in building a spoken language understanding (SLU) or dialogue system. This paper explores five different modelling approaches for this task and presents results on a French state-of-the-art corpus, MEDIA. Additionally, two of the log-linear modelling approaches could be further improved by adding morphological knowledge. Going beyond what has been reported in the literature, we applied all models to the same training and test data and evaluated the results with the NIST scoring toolkit, ensuring identical conditions for each experiment and the comparability of the results. Using a model based on conditional random fields, we achieve a concept error rate of 11.8% on the MEDIA evaluation corpus.
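To make the flat concept tagging task concrete, here is a minimal sketch of BIO-style concept tagging with a linear-chain CRF, using the third-party sklearn-crfsuite package rather than the toolkit evaluated in the paper. The toy sentences, concept label names, and feature set (word identity, suffixes, and neighbouring words as a stand-in for the morphological features mentioned above) are invented for illustration and do not reproduce the MEDIA setup behind the 11.8% concept error rate.

```python
# A minimal sketch of flat concept tagging with a linear-chain CRF in the BIO
# scheme, assuming the third-party sklearn-crfsuite package. Toy data and
# label names are hypothetical, not the actual MEDIA annotation.
import sklearn_crfsuite


def token_features(sent, i):
    word = sent[i]
    return {
        "word": word.lower(),
        "suffix3": word[-3:],                                   # crude morphological cue
        "is_digit": word.isdigit(),
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }


# Toy word sequences paired with flat concept tags (hypothetical label names).
sentences = [["je", "veux", "deux", "chambres", "doubles"],
             ["une", "chambre", "simple", "pour", "demain"]]
tags = [["O", "O", "B-room_number", "B-object", "B-room_type"],
        ["B-room_number", "B-object", "B-room_type", "O", "B-date"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
y = tags

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                           max_iterations=50, all_possible_transitions=True)
crf.fit(X, y)

test = ["une", "chambre", "double"]
print(crf.predict_single([token_features(test, i) for i in range(len(test))]))
```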