Siyu Tian
Also published as: 思雨 田
2024
银瞳:基于自适应语义空间学习的中文金融多任务大模型(SilverSight: A Multi-Task Chinese Financial Large Language Model Based on Adaptive Semantic Space Learning)
Yuhang Zhou (周宇航) | Zeping Li (李泽平) | Siyu Tian (田思雨) | Yuchen Ni (倪雨琛) | Jian Zhang (张健) | Xiang Liu (刘响) | Guangnan Ye (叶广楠) | Jie Wu (吴杰) | Hongfeng Chai (柴洪峰)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
“Large language models are increasingly being applied to various vertical domains, leveraging their broad knowledge to empower a variety of scenarios within each domain. However, each domain contains many specific tasks to be learned, and multi-source, heterogeneous domain data can easily cause conflicts when the model transfers across tasks. To address this, we propose an Adaptive Semantic Space Learning framework, which adaptively redistributes data within the semantic space to improve both the performance and the selection effectiveness of multi-expert models, and we train a multi-task financial large language model, “SilverSight,” based on this framework. Our results show that the framework achieves performance close to full-data training while using only 10% of the data, and exhibits strong generalization.”
R3-NL2GQL: A Model Coordination and Knowledge Graph Alignment Approach for NL2GQL
Yuhang Zhou | Yu He | Siyu Tian | Yuchen Ni | Zhangyue Yin | Xiang Liu | Chuanjun Ji | Sen Liu | Xipeng Qiu | Guangnan Ye | Hongfeng Chai
Findings of the Association for Computational Linguistics: EMNLP 2024
While approaches that use Foundation Models to convert natural language to SQL (NL2SQL) have achieved impressive results, adapting them to convert natural language to Graph Query Language (NL2GQL) faces hurdles due to the distinct nature of GQL compared to SQL and the diversity of GQL dialects. Moving away from traditional rule-based and slot-filling methodologies, we introduce a novel approach, R3-NL2GQL, which integrates both small and large Foundation Models for ranking, rewriting, and refining tasks. This method leverages the interpretative strengths of smaller models in the initial ranking and rewriting stages, while capitalizing on the superior generalization and query-generation capabilities of larger models for the final transformation of natural language queries into GQL. Addressing the scarcity of datasets in this emerging field, we have developed a bilingual dataset sourced from graph database manuals and selected open-source Knowledge Graphs (KGs). Our evaluation on this dataset demonstrates the method's promising efficacy and robustness.