Zelalem Gero
2024
DocLens: Multi-aspect Fine-grained Medical Text Evaluation
Yiqing Xie | Sheng Zhang | Hao Cheng | Pengfei Liu | Zelalem Gero | Cliff Wong | Tristan Naumann | Hoifung Poon | Carolyn Rose
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Medical text generation aims to assist with administrative work and highlight salient information to support decision-making. To reflect the specific requirements of medical text, in this paper, we propose a set of metrics to evaluate the completeness, conciseness, and attribution of the generated text at a fine-grained level. The metrics can be computed by various types of evaluators, including instruction-following models (both proprietary and open-source) and supervised entailment models. We demonstrate the effectiveness of the resulting framework, DocLens, with three evaluators on three tasks: clinical note generation, radiology report summarization, and patient question summarization. A comprehensive human study shows that DocLens exhibits substantially higher agreement with the judgments of medical experts than existing metrics. The results also highlight the need to improve open-source evaluators and suggest potential directions. We release the code at https://github.com/yiqingxyq/DocLens.
2021
Word centrality constrained representation for keyphrase extraction
Zelalem Gero | Joyce Ho
Proceedings of the 20th Workshop on Biomedical Language Processing
To keep pace with the increased generation and digitization of documents, automated methods that can improve search, discovery, and mining of the vast body of literature are essential. Keyphrases provide a concise representation by identifying salient concepts in a document. Various supervised approaches model keyphrase extraction using local context to predict the label for each token and perform much better than their unsupervised counterparts. Unfortunately, this method fails for short documents where the context is unclear. Moreover, keyphrases, which are usually the gist of a document, need to reflect its central theme. We propose a new extraction model that introduces a centrality constraint to enrich the word representation of a bidirectional long short-term memory (BiLSTM) network. Performance evaluation on two publicly available datasets demonstrates that our model outperforms existing state-of-the-art approaches.