Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health

Andrew Lee, Jonathan K. Kummerfeld, Larry An, Rada Mihalcea


Abstract
Many statistical models have high accuracy on test benchmarks, but are not explainable, struggle in low-resource scenarios, cannot be reused for multiple tasks, and cannot easily integrate domain expertise. These factors limit their use, particularly in settings such as mental health, where it is difficult to annotate datasets and model outputs have significant impact. We introduce a micromodel architecture to address these challenges. Our approach allows researchers to build interpretable representations that embed domain knowledge and provide explanations throughout the model’s decision process. We demonstrate the idea on multiple mental health tasks: depression classification, PTSD classification, and suicidal risk assessment. Our systems consistently produce strong results, even in low-resource scenarios, and are more interpretable than alternative methods.
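To make the abstract's idea concrete, below is a minimal sketch of how a micromodel-style pipeline might be assembled: several small, independently interpretable detectors ("micromodels") each score an input utterance, and the vector of their scores serves as an explainable feature representation for a downstream task classifier. The micromodel names, keyword lists, and the logistic-regression aggregator here are illustrative assumptions, not the authors' released implementation; see the MichiganNLP/micromodels repository linked below for the actual code.

# Minimal sketch of a micromodel-style pipeline (illustrative only).
# Each "micromodel" is a small, transparent detector that scores one
# behavior in an utterance; the vector of scores is an interpretable
# feature representation for a downstream task classifier.

from sklearn.linear_model import LogisticRegression

# Hypothetical keyword-based micromodels; real micromodels could instead be
# similarity-based (e.g., nearest-neighbor search over sentence embeddings).
MICROMODELS = {
    "mentions_sleep_issues": lambda text: float(
        any(w in text.lower() for w in ("insomnia", "can't sleep", "sleepless"))
    ),
    "expresses_hopelessness": lambda text: float(
        any(w in text.lower() for w in ("hopeless", "no point", "give up"))
    ),
    "mentions_treatment": lambda text: float(
        any(w in text.lower() for w in ("therapy", "medication", "counselor"))
    ),
}

def featurize(text):
    """Run every micromodel and return an interpretable feature vector."""
    return [detector(text) for detector in MICROMODELS.values()]

# Tiny illustrative training set (labels: 1 = positive screen, 0 = negative).
train_texts = [
    "I feel hopeless and I can't sleep at night.",
    "Started therapy last month and things are improving.",
    "There is no point in trying anymore.",
    "Had a great weekend hiking with friends.",
]
train_labels = [1, 0, 1, 0]

X = [featurize(t) for t in train_texts]
clf = LogisticRegression().fit(X, train_labels)

# A prediction can be explained by reporting which micromodels fired
# and the (small, inspectable) classifier weights over their scores.
example = "Lately everything feels hopeless."
print(dict(zip(MICROMODELS, featurize(example))))
print("prediction:", clf.predict([featurize(example)])[0])

Because each feature corresponds to a named behavior rather than an opaque embedding dimension, the same micromodels can be reused across tasks (e.g., depression, PTSD, suicidal risk) and their individual decisions can be inspected by domain experts.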
Anthology ID: 2021.findings-emnlp.360
Volume: Findings of the Association for Computational Linguistics: EMNLP 2021
Month: November
Year: 2021
Address: Punta Cana, Dominican Republic
Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue: Findings
SIG: SIGDAT
Publisher: Association for Computational Linguistics
Pages: 4257–4272
URL: https://aclanthology.org/2021.findings-emnlp.360
DOI: 10.18653/v1/2021.findings-emnlp.360
Cite (ACL): Andrew Lee, Jonathan K. Kummerfeld, Larry An, and Rada Mihalcea. 2021. Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4257–4272, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal): Micromodels for Efficient, Explainable, and Reusable Systems: A Case Study on Mental Health (Lee et al., Findings 2021)
PDF: https://aclanthology.org/2021.findings-emnlp.360.pdf
Video: https://aclanthology.org/2021.findings-emnlp.360.mp4
Code: MichiganNLP/micromodels