Proceedings of the 22nd Chinese National Conference on Computational Linguistics (Volume 4: Tutorial Abstracts)

Maosong Sun, Bing Qin, Xipeng Qiu, Jing Jiang, Xianpei Han (Editors)


Anthology ID:
2023.ccl-4
Month:
August
Year:
2023
Address:
Harbin, China
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
URL:
https://aclanthology.org/2023.ccl-4
PDF:
https://aclanthology.org/2023.ccl-4.pdf

预训练语言模型中的知识分析、萃取与增强 (Knowledge Analysis, Extraction and Enhancement in Pre-trained Language Models)
Chen Yubo (玉博 陈) | Cao Pengfei (鹏飞 曹) | Wang Chenhao (晨皓 王) | Li Jiachun (嘉淳 李) | Liu Kang (康 刘) | Zhao Jun (军 赵)

“In recent years, large-scale pre-trained language models have made remarkable progress on knowledge-intensive natural language processing tasks. This suggests that pre-trained language models can spontaneously acquire large amounts of knowledge from corpora and store it implicitly in their parameters. However, the mechanism behind this phenomenon remains shrouded in mystery: exactly what knowledge do language models capture, how can that knowledge be extracted and used, and how can external knowledge compensate for the models’ shortcomings? All of these questions call for further exploration. In this tutorial, we will focus on recent research progress in knowledge analysis, knowledge extraction, and knowledge enhancement for pre-trained language models.”
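As a concrete illustration of the kind of knowledge analysis this tutorial covers, the sketch below probes a masked language model with cloze-style queries, in the spirit of LAMA-style probing. This is a minimal sketch, not the presenters' own setup; the model choice and prompts are illustrative assumptions.

```python
# Minimal sketch of cloze-style knowledge probing (LAMA-style).
# Assumption: bert-base-uncased is an illustrative model choice,
# not the one used in the tutorial.
from transformers import pipeline

probe = pipeline("fill-mask", model="bert-base-uncased")

# If the model has stored a fact in its parameters, the correct token
# should rank highly among its predictions for the [MASK] slot.
queries = [
    "The capital of France is [MASK].",
    "Dante was born in [MASK].",
]

for q in queries:
    print(q)
    for cand in probe(q, top_k=3):
        print(f"  {cand['token_str']:>10}  p={cand['score']:.3f}")
```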

Safety and Ethical Concerns of Large Language Models
Xi Zhiheng | Zheng Rui | Gui Tao

“Recent months have witnessed significant progress in the field of large language models (LLMs). Represented by ChatGPT and GPT-4, LLMs perform well in various natural language processing tasks and have been applied to many downstream applications to facilitate people’s lives. However, there still exist safety and ethical concerns. Specifically, LLMs suffer from social bias, robustness problems, and poisoning issues, all of which may induce LLMs to spew harmful content. We propose this tutorial as a gentle introduction to the safety and ethical issues of LLMs.”
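To make the social-bias concern concrete, here is a minimal sketch of a template-based probe that compares the probabilities a masked language model assigns to gendered pronouns in occupation templates. The model, templates, and pronoun pair are illustrative assumptions, not a benchmark from the tutorial.

```python
# Minimal sketch of a template-based social-bias probe for a masked LM.
# Model, templates, and word choices are illustrative assumptions.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def mask_prob(template: str, filler: str) -> float:
    """Probability assigned to `filler` at the [MASK] position."""
    enc = tok(template, return_tensors="pt")
    pos = (enc.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**enc).logits[0, pos]
    return logits.softmax(-1)[tok.convert_tokens_to_ids(filler)].item()

# A ratio far from 1 means the model ties the occupation more strongly
# to one pronoun -- a simple, if coarse, signal of social bias.
for job in ["nurse", "engineer", "teacher"]:
    t = f"[MASK] works as a {job}."
    p_he, p_she = mask_prob(t, "he"), mask_prob(t, "she")
    print(f"{job:>9}: p(he)={p_he:.3f}  p(she)={p_she:.3f}  ratio={p_he / p_she:.2f}")
```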

Studying Language Processing in the Human Brain with Speech and Language Models
Zhang Chao | Thwaites Andrew | Wingfield Cai

“Speech and language computational models have been instrumental in advancing Artificial Intelligence in recent years. However, it remains an open question whether the human brain is employing similar approaches to these models. This tutorial aims to provide an accessible introduction to the extensive research on this topic, specifically focusing on studies that seek to establish quantitative correlations between neuroimaging data from human subjects and the output of language models or automatic speech recognition systems. The tutorial covers various aspects of this research, including a brief overview of brain-computer interfaces and neuroscience, common techniques for data processing and pattern analysis, and representative research examples. Finally, the tutorial addresses the main limitations and technical challenges encountered in this field, as well as the relationship between brain mechanism research and brain-inspired artificial intelligence.”
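The quantitative correlations mentioned above are commonly computed with linear encoding models: a map is fit from model features to neuroimaging responses and scored on held-out data. The sketch below shows this pattern with random placeholder arrays standing in for real stimulus features and recordings; it is an assumption-laden illustration, not a pipeline from the tutorial.

```python
# Minimal sketch of a linear encoding-model analysis.
# X and Y are random placeholders for model features and brain data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 768))  # LM activations: 500 stimuli x 768 dims
Y = rng.standard_normal((500, 100))  # recordings: 500 stimuli x 100 voxels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge regularization is standard because feature dims often exceed stimuli.
enc = Ridge(alpha=100.0).fit(X_tr, Y_tr)
Y_hat = enc.predict(X_te)

# Per-voxel Pearson correlation between predicted and observed responses;
# reliably positive values indicate shared representational structure.
r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(f"mean r = {np.mean(r):.3f}  (near 0 here, since the data are random)")
```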

Foundation Models for Robotics: Best Known Practices
Xu Shaocong | Zhao Hao

“Artificial general intelligence (AGI) used to be a sci-fi word, but recently the surprising generalization capability of foundation models has triggered a lot of attention to AGI, in both academia and industry. Large language models can now answer questions or chat with human beings, using fluent sentences and clear reasoning. Diffusion models can now draw pictures of unprecedented photo-realism, according to human commands and controls. Researchers have also made substantial efforts to explore new possibilities for robotics applications with the help of foundation models. Since this interdisciplinary field is still under fast development, there are no clear methodological conclusions for now. In this tutorial, I will briefly go through best known practices that have shown transformative capabilities in several sub-fields. Specifically, there are five representative paradigms: (1) Using foundation models to allow human-friendly human-car interaction; (2) Using foundation models to equip robots with the capability of understanding vague human needs; (3) Using foundation models to break down complex tasks into achievable sub-tasks; (4) Using foundation models to compose skill primitives so that reinforcement learning can work with sparse rewards; (5) Using foundation models to bridge language commands and low-level control dynamics. I hope these best known practices will inspire NLP researchers.”
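Paradigm (3), task decomposition, can be illustrated with a small prompting sketch: the foundation model is asked to rewrite a vague instruction as a sequence of known skill calls, which are parsed before execution. The skill names, prompt wording, and `call_llm` stand-in below are all hypothetical; a real system would validate each parsed step against the robot's actual skill library, as in SayCan-style pipelines.

```python
# Minimal sketch of LLM-based task decomposition into skill primitives.
# Skill names, prompt wording, and the model backend are hypothetical.
from typing import Callable, List

DECOMPOSE_PROMPT = """You control a robot with these skills:
navigate_to(place), pick(object), place(object, place).
Rewrite the task below as a numbered list of skill calls, one per line.

Task: {task}
Plan:"""

def plan(task: str, call_llm: Callable[[str], str]) -> List[str]:
    """Query the model, then parse one skill call per line."""
    reply = call_llm(DECOMPOSE_PROMPT.format(task=task))
    steps = []
    for line in reply.splitlines():
        line = line.strip().lstrip("0123456789. ")  # drop "1. " prefixes
        if line:
            steps.append(line)
    return steps

# Stand-in backend for demonstration; a real system would call an LLM here.
fake_llm = lambda _: "1. navigate_to(kitchen)\n2. pick(cup)\n3. place(cup, sink)"
print(plan("bring the cup to the sink", fake_llm))
# -> ['navigate_to(kitchen)', 'pick(cup)', 'place(cup, sink)']
```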