预训练语言模型中的知识分析、萃取与增强 (Knowledge Analysis, Extraction and Enhancement in Pre-trained Language Models)

Chen Yubo (陈玉博), Cao Pengfei (曹鹏飞), Wang Chenhao (王晨皓), Li Jiachun (李嘉淳), Liu Kang (刘康), Zhao Jun (赵军)


Abstract
In recent years, large-scale pre-trained language models have achieved remarkable progress on knowledge-intensive natural language processing tasks. This seems to suggest that pre-trained language models can spontaneously acquire large amounts of knowledge from corpora and store it implicitly in their parameters. However, the mechanism behind this phenomenon remains poorly understood: what knowledge do language models actually capture, how can that knowledge be extracted and exploited, and how can external knowledge compensate for the models' shortcomings? These questions all call for further exploration. In this tutorial, we survey recent research progress on knowledge analysis, knowledge extraction, and knowledge enhancement in pre-trained language models.
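The knowledge-analysis theme the abstract mentions is commonly operationalized as cloze-style probing of a masked language model, as in LAMA-style studies. The sketch below is a minimal illustration of that general technique, not the authors' code; the model name and prompt are assumptions chosen for the example.

```python
# Minimal cloze-style knowledge probing sketch (LAMA-style).
# Assumptions: bert-base-cased as the probed model and a hand-written
# prompt; neither comes from the tutorial itself.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Phrase a factual relation as a cloze statement and read off the
# model's top predictions for the masked slot.
for pred in fill_mask("The capital of France is [MASK].", top_k=3):
    print(f"{pred['token_str']:>10}  {pred['score']:.3f}")
```

If the model has internalized the relevant fact during pre-training, the correct filler (here, "Paris") should rank near the top of the predicted tokens; how reliably this holds across relations is exactly the kind of question the knowledge-analysis literature examines.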
Anthology ID: 2023.ccl-4.1
Volume: Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 4: Tutorial Abstracts)
Month: August
Year: 2023
Address: Harbin, China
Editors: Maosong Sun, Bing Qin, Xipeng Qiu, Jing Jiang, Xianpei Han
Venue: CCL
Publisher: Chinese Information Processing Society of China
Pages: 1–8
Language: Chinese
URL: https://aclanthology.org/2023.ccl-4.1
Cite (ACL): Chen Yubo, Cao Pengfei, Wang Chenhao, Li Jiachun, Liu Kang, and Zhao Jun. 2023. 预训练语言模型中的知识分析、萃取与增强 (Knowledge Analysis, Extraction and Enhancement in Pre-trained Language Models). In Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 4: Tutorial Abstracts), pages 1–8, Harbin, China. Chinese Information Processing Society of China.
Cite (Informal): 预训练语言模型中的知识分析、萃取与增强 (Knowledge Analysis, Extraction and Enhancement in Pre-trained Language Models) (Chen et al., CCL 2023)
PDF: https://aclanthology.org/2023.ccl-4.1.pdf