Efficient and Robust Knowledge Graph Construction

Ningyu Zhang, Tao Gui, Guoshun Nan


Abstract
Knowledge graph construction, which aims to extract knowledge from text corpora, has long appealed to researchers in the NLP community. Previous decades have witnessed remarkable progress in knowledge graph construction built on neural models; however, these models often require massive computational or labeled-data resources and suffer from unstable inference in the presence of biased or adversarial samples. Recently, numerous approaches, such as prompt learning and adversarial training, have been explored to mitigate these efficiency and robustness issues in knowledge graph construction. In this tutorial, we aim to bring interested NLP researchers up to speed on the recent and ongoing techniques for efficient and robust knowledge graph construction. Additionally, our goal is to provide a systematic and up-to-date overview of these methods and to reveal new research opportunities to the audience.
Anthology ID:
2022.aacl-tutorials.1
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Tutorial Abstracts
Month:
November
Year:
2022
Address:
Taipei
Editors:
Miguel A. Alonso, Zhongyu Wei
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
1–7
URL:
https://aclanthology.org/2022.aacl-tutorials.1
Cite (ACL):
Ningyu Zhang, Tao Gui, and Guoshun Nan. 2022. Efficient and Robust Knowledge Graph Construction. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Tutorial Abstracts, pages 1–7, Taipei. Association for Computational Linguistics.
Cite (Informal):
Efficient and Robust Knowledge Graph Construction (Zhang et al., AACL-IJCNLP 2022)
PDF:
https://aclanthology.org/2022.aacl-tutorials.1.pdf