2024
SpanCS: Span-Level Code-Switching for Cross-Lingual Code Generation (SpanCS:面向跨语言代码生成的片段级语码转换)
Zhu Qingfu (朱庆福) | Zhou Shiqi (周士祺) | Wang Shuo (王硕) | Zhang Zhiming (张致铭) | Wang Haoyu (王昊钰) | Chen Qiguang (陈麒光) | Che Wanxiang (车万翔)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
“Cross-lingual code generation aims to transfer the English-to-code generation capability to other natural languages. Translate-Train and Code-Switching are two classic data augmentation approaches to cross-lingual transfer; their strengths are complementary, but the two have not yet been effectively combined. To this end, this paper proposes a Span-level Code-Switching (SpanCS) method for cross-lingual code generation. First, the method uses a code-switching framework to associate source-language context with target-language spans, promoting interaction and alignment across languages. Second, it uses the Translate-Train method to extract the target-language spans from a complete translation of the source text, ensuring semantic consistency between the augmented data and the original data. To fairly evaluate code generation performance across natural languages, this paper builds MHumanEval, a multilingual code generation benchmark covering ten natural languages, constructed from HumanEval through human translation and verification. Experimental results with three backbone models on this benchmark show that SpanCS consistently outperforms previous data augmentation methods on cross-lingual code generation.”
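The abstract outlines SpanCS only at a high level; below is a minimal Python sketch of the core augmentation step as described: target-language spans, extracted from a full Translate-Train translation, are spliced into the source-language context. The function name, alignment format, and switch_ratio parameter are illustrative assumptions, not details from the paper.

```python
import random

def span_code_switch(src_tokens, tgt_tokens, alignment, switch_ratio=0.3, seed=0):
    """Splice aligned target-language spans into the source-language context.

    src_tokens: tokenized source-language prompt (e.g., English).
    tgt_tokens: tokenized full target-language translation (Translate-Train output).
    alignment:  list of ((src_start, src_end), (tgt_start, tgt_end)) index pairs,
                e.g., produced by a word aligner; end indices are exclusive.
    """
    rng = random.Random(seed)
    out = list(src_tokens)
    # Process spans right to left so earlier indices stay valid after splicing.
    for (s_start, s_end), (t_start, t_end) in sorted(
        alignment, key=lambda pair: pair[0][0], reverse=True
    ):
        if rng.random() < switch_ratio:
            out[s_start:s_end] = tgt_tokens[t_start:t_end]
    return out

# Hypothetical usage: mix Chinese spans into an English prompt.
src = "return the sum of two numbers".split()
tgt = "返回 两个 数字 的 和".split()
alignment = [((1, 3), (4, 5)), ((3, 6), (1, 4))]
print(" ".join(span_code_switch(src, tgt, alignment, switch_ratio=1.0)))
```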
Chinese Image-Text Multimodal Understanding Evaluation (中文图文多模态理解评测)
Wang Yuxuan (王宇轩) | Liu Yijun (刘议骏) | Wan Zhiguo (万志国) | Che Wanxiang (车万翔)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 3: Evaluations)
“The Chinese image-text multimodal understanding evaluation task aims to assess, from multiple perspectives, the image-text multimodal modeling and understanding abilities of Chinese vision-language pretrained models. The task comprises five subtasks: image retrieval, text retrieval, visual question answering, visual grounding, and visual dialogue; the final score is computed from the scores on these five subtasks. This paper first introduces the background and motivation of the task, and then presents the evaluation in terms of the task description, evaluation metrics, competition results, and participating systems. A total of 11 teams registered for the task, of which 3 submitted results.”
2023
FinBART: A Pre-trained Seq2seq Language Model for Chinese Financial Tasks
Dong Hongyuan | Che Wanxiang | He Xiaoyu | Zheng Guidong | Wen Junjie
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
“Pretrained language models are making a more profound impact on our lives than ever before. They exhibit promising performance on a variety of general-domain Natural Language Processing (NLP) tasks. However, little work focuses on Chinese financial NLP tasks, which comprise a significant portion of social communication. To this end, we propose FinBART, a pretrained seq2seq language model for Chinese financial communication tasks. Experiments show that FinBART outperforms baseline models on a series of downstream tasks including text classification, sequence labeling and text generation. We further pretrain the model on customer service corpora, and results show that our model outperforms baseline models and achieves promising performance on various real-world customer service text mining tasks.”
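The abstract does not show how the seq2seq model is applied to the downstream tasks it lists; the sketch below illustrates the common pattern of casting classification as generation with a BART-style model via Hugging Face transformers. The checkpoint path is a placeholder (FinBART is not assumed to be publicly released), and the label-as-generated-text format is an assumption, not a detail from the paper.

```python
# A minimal, hypothetical sketch: treating financial text classification as
# sequence-to-sequence generation, as a seq2seq model like FinBART could.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "path/to/finbart-checkpoint"  # placeholder, not a real model id
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

text = "公司一季度净利润同比增长20%"  # "Q1 net profit grew 20% year over year"
inputs = tokenizer(text, return_tensors="pt")
# Generate the label as text, e.g. "正面" (positive), and decode it.
label_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(label_ids[0], skip_special_tokens=True))
```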