Explaining Neural NLP Models for the Joint Analysis of Open-and-Closed-Ended Survey Answers

Edoardo Mosca, Katharina Harmann, Tobias Eder, Georg Groh


Abstract
Large-scale surveys are a widely used instrument for collecting data from a target audience. Beyond the individual respondent, appropriate analysis of the answers can reveal trends and patterns, generating new insights and knowledge for researchers. Current analysis practices employ shallow machine learning methods or rely on (biased) human judgment. This work investigates the use of state-of-the-art NLP models such as BERT to automatically extract information from both open- and closed-ended questions. We also leverage explainability methods at different levels of granularity to derive further knowledge from the analysis model. Experiments on EMS, a survey-based study of the factors influencing a student's career goals, show that the proposed approach can identify such factors at both the input level and the higher concept level.
Anthology ID:
2022.trustnlp-1.5
Volume:
Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022)
Month:
July
Year:
2022
Address:
Seattle, U.S.A.
Editors:
Apurv Verma, Yada Pruksachatkun, Kai-Wei Chang, Aram Galstyan, Jwala Dhamala, Yang Trista Cao
Venue:
TrustNLP
Publisher:
Association for Computational Linguistics
Pages:
49–63
URL:
https://aclanthology.org/2022.trustnlp-1.5
DOI:
10.18653/v1/2022.trustnlp-1.5
Cite (ACL):
Edoardo Mosca, Katharina Harmann, Tobias Eder, and Georg Groh. 2022. Explaining Neural NLP Models for the Joint Analysis of Open-and-Closed-Ended Survey Answers. In Proceedings of the 2nd Workshop on Trustworthy Natural Language Processing (TrustNLP 2022), pages 49–63, Seattle, U.S.A. Association for Computational Linguistics.
Cite (Informal):
Explaining Neural NLP Models for the Joint Analysis of Open-and-Closed-Ended Survey Answers (Mosca et al., TrustNLP 2022)
PDF:
https://aclanthology.org/2022.trustnlp-1.5.pdf