Siyu Tian
Also published as: 思雨 田
2025
CMT-Eval: A Novel Chinese Multi-turn Dialogue Evaluation Dataset Addressing Real-world Conversational Challenges
Siyu Tian | Kaijie Mo | Yupei Wang | Renfen Hu
Findings of the Association for Computational Linguistics: EMNLP 2025
Multi-turn dialogue is a key paradigm for interaction between users and Large Language Models (LLMs). However, existing evaluation benchmarks fail to capture users’ evolving needs and how their diverse conversation styles affect the dialogue flow. To address these limitations, we propose CMT-Eval, the first dedicated dataset for fine-grained evaluation of Chinese multi-turn dialogue systems. Built upon a linguistic theory-driven Speech Act Framework, diverse user personas, and varied conversational challenges, CMT-Eval comprises 596 high-quality dialogues with 4,431 turns, simulating realistic, multifaceted, and challenging conversations. Experiments reveal that models struggle with specific speech acts, user personas, and complex scenarios, highlighting the effectiveness of CMT-Eval in assessing LLMs’ multi-turn dialogue capabilities and providing valuable insights for their enhancement. The dataset, code, and prompts are available at https://github.com/hejaida/CMT-Eval.
2024
银瞳:基于自适应语义空间学习的中文金融多任务大模型(SilverSight: A Multi-Task Chinese Financial Large Language Model Based on Adaptive Semantic Space Learning)
Yuhang Zhou (周宇航) | Zeping Li (李泽平) | Siyu Tian (思雨 田) | Yuchen Ni (倪雨琛) | Jian Zhang (张健) | Xiang Liu (刘响) | Guangnan Ye (叶广楠) | Jie Wu (吴杰) | Hongfeng Chai (柴洪峰)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Large language models are increasingly being applied to various vertical domains, leveraging their broad knowledge to empower diverse scenarios within those domains. However, each domain contains many specific tasks to be learned, and multi-source, heterogeneous domain data can easily cause conflicts when the model transfers across tasks. To address this, we propose an Adaptive Semantic Space Learning framework that adaptively redistributes data within the semantic space to improve the performance and selection of multi-expert models, and we train a financial multi-task large language model, SilverSight, based on this framework. Results show that our framework achieves performance close to full-data training using only 10% of the data, while exhibiting strong generalization.
R3-NL2GQL: A Model Coordination and Knowledge Graph Alignment Approach for NL2GQL
Yuhang Zhou | Yu He | Siyu Tian | Yuchen Ni | Zhangyue Yin | Xiang Liu | Chuanjun Ji | Sen Liu | Xipeng Qiu | Guangnan Ye | Hongfeng Chai
Findings of the Association for Computational Linguistics: EMNLP 2024
While current approaches to converting natural language to SQL (NL2SQL) using Foundation Models have achieved impressive results, adapting these approaches to converting natural language to Graph Query Language (NL2GQL) encounters hurdles due to the distinct nature of GQL compared to SQL, alongside the diverse forms of GQL. Moving away from traditional rule-based and slot-filling methodologies, we introduce a novel approach, R3-NL2GQL, which integrates both small and large Foundation Models for ranking, rewriting, and refining tasks. This method leverages the interpretative strengths of smaller models for the initial ranking and rewriting stages, while capitalizing on the superior generalization and query-generation capabilities of larger models for the final transformation of natural language queries into GQL. Addressing the scarcity of datasets in this emerging field, we have developed a bilingual dataset sourced from graph database manuals and selected open-source Knowledge Graphs (KGs). Our evaluation of this methodology on the dataset demonstrates its promising efficacy and robustness.
2023
基于结构树库的补语位形容词语义分析及搭配库构建 (Semantic Analysis of Adjectives in the Complement Position and Construction of a Collocation Database Based on a Structural Treebank)
Siyu Tian (思雨 田) | Tian Shao (邵田) | Endong Xun (荀恩东) | Gaoqi Rao (饶高琦)
Proceedings of the 22nd Chinese National Conference on Computational Linguistics
In bound predicate-complement constructions where an adjective serves as the complement, two predicative elements typically appear in sequence ("adjective + adjective" or "verb + adjective"). Because this construction carries no formal marker, automatic identification by computer is difficult. Moreover, serving as a complement is not the most basic or typical use of an adjective (attributive and predicative uses are), and the construction has received insufficient attention in both linguistics and computational linguistics. This paper therefore takes adjectives in the complement position as its object of study. We extract predicate-complement constructions in which adjectives directly serve as complements from a large-scale syntactic treebank, denoise the corpus through a combination of programmatic filtering and manual verification, exhaustively retrieve adjectives in the complement position to obtain a word list, further subcategorize the semantics of these adjectives, and construct a corresponding semantic collocation database. This work not only improves the accuracy of syntactic segmentation and provides semantic information for deep syntactic-semantic analysis, but also offers a reference for related research in theoretical linguistics.