Reconstructing Implicit Knowledge with Language Models

Maria Becker, Siting Liang, Anette Frank


Abstract
In this work we propose an approach for generating statements that explicate the implicit knowledge connecting sentences in a text. We build on pre-trained language models, which we refine in two ways: by fine-tuning them on specifically prepared corpora enriched with implicit information, and by constraining them with relevant concepts and connecting commonsense knowledge paths. Manual and automatic evaluation of the generated statements shows that language models refined in this way produce coherent and grammatically sound sentences that explicate the implicit knowledge connecting sentence pairs in texts, on both in-domain and out-of-domain test data.
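For readers who want a concrete picture of the recipe the abstract describes, the following is a minimal sketch of fine-tuning a pre-trained language model on inputs augmented with constraining concepts. It is not the authors' released code (see the heidelberg-nlp repository linked below); the model choice (BART), the <CONCEPTS> separator token, and the toy training pair are assumptions made purely for illustration.

# Minimal sketch: seq2seq fine-tuning where the input sentence pair is
# augmented with constraining concepts and the target is the implicit
# statement to be generated. Model, separator, and data are illustrative,
# not taken from the paper or its released code.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Register the separator as a token so it is not split into subwords.
tokenizer.add_tokens(["<CONCEPTS>"])
model.resize_token_embeddings(len(tokenizer))

# Toy instance: two adjacent sentences plus concepts drawn from a
# commonsense resource; the target explicates the knowledge linking them.
source = ("She dropped the glass. The floor was covered in shards. "
          "<CONCEPTS> glass, fragile, break")
target = "Glass is fragile and breaks when dropped."

inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt").input_ids

# One step of standard cross-entropy fine-tuning; a real run would loop
# over a corpus enriched with implicit statements.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
loss = model(**inputs, labels=labels).loss
loss.backward()
optimizer.step()

# After fine-tuning, generation is steered by whatever concepts are
# appended to the source sequence.
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Appending concepts to the source sequence is only one way to impose the constraint; per the abstract, the paper also constrains generation with connecting commonsense knowledge paths between the two sentences.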
Anthology ID:
2021.deelio-1.2
Volume:
Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures
Month:
June
Year:
2021
Address:
Online
Editors:
Eneko Agirre, Marianna Apidianaki, Ivan Vulić
Venue:
DeeLIO
Publisher:
Association for Computational Linguistics
Pages:
11–24
URL:
https://aclanthology.org/2021.deelio-1.2
DOI:
10.18653/v1/2021.deelio-1.2
Cite (ACL):
Maria Becker, Siting Liang, and Anette Frank. 2021. Reconstructing Implicit Knowledge with Language Models. In Proceedings of Deep Learning Inside Out (DeeLIO): The 2nd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pages 11–24, Online. Association for Computational Linguistics.
Cite (Informal):
Reconstructing Implicit Knowledge with Language Models (Becker et al., DeeLIO 2021)
PDF:
https://aclanthology.org/2021.deelio-1.2.pdf
Optional supplementary data:
2021.deelio-1.2.OptionalSupplementaryData.pdf
Code
 heidelberg-nlp/lms4implicit-knowledge-generation
Data
ConceptNet, GenericsKB, SNLI, e-SNLI