Yanqiu Shao


2024

Enhancing Discourse Dependency Parsing with Sentence Dependency Parsing: A Unified Generative Method Based on Code Representation
Zizhuo Shen | Yanqiu Shao | Wei Li
Findings of the Association for Computational Linguistics: EMNLP 2024

Due to the high complexity of Discourse Dependency Parsing (DDP), annotation resources for it are relatively scarce compared to other NLP tasks, and annotation schemas differ significantly across DDP tasks. These issues leave DDP in a low-resource dilemma. Thanks to the powerful cross-task learning capabilities of Large Language Models (LLMs), we can use LLMs to model dependency parsing under different annotation schemas in a unified manner and thereby alleviate this dilemma. However, enabling LLMs to deeply comprehend dependency parsing tasks remains an underexplored challenge. Inspired by the application of code-based methods to complex tasks, we propose a code-based unified dependency parsing method: we treat dependency parsing as a search over dependency paths and use code to represent this search process. Furthermore, we use a curriculum-learning-based instruction tuning strategy to jointly train multiple dependency parsing tasks. Experimental results show that our code-based DDP system achieves good performance on two Chinese DDP tasks, with an especially significant improvement on the task with less training data.
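
A minimal sketch of how a parse might be serialized as code in this spirit, treating each arc as one step of the path search; the Unit/add_arc names and the example labels are hypothetical illustrations, not the paper's released representation:

```python
# Serialize a dependency parse as a code snippet for LLM prompting:
# unit declarations first, then one search step per dependency arc.
from dataclasses import dataclass

@dataclass
class Arc:
    head: int      # index of the governing unit (0 = virtual root)
    dep: int       # index of the dependent unit
    label: str     # relation label under the task's annotation schema

def parse_as_code(units: list[str], arcs: list[Arc]) -> str:
    lines = [f"u{i} = Unit({text!r})" for i, text in enumerate(units, 1)]
    for a in arcs:
        head = "root" if a.head == 0 else f"u{a.head}"
        lines.append(f"add_arc(head={head}, dep=u{a.dep}, label={a.label!r})")
    return "\n".join(lines)

edus = ["The experiment failed.", "So we revised the protocol."]
gold = [Arc(head=0, dep=1, label="ROOT"), Arc(head=1, dep=2, label="result")]
print(parse_as_code(edus, gold))
```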

An Unsupervised Framework for Adaptive Context-aware Simplified-Traditional Chinese Character Conversion
Wei Li | Shutan Huang | Yanqiu Shao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Traditional Chinese characters are an important carrier of Chinese culture and are still actively used in many regions. Automatic conversion between traditional and simplified Chinese characters can help modern readers understand traditional culture and facilitate communication across regions. Previous conversion methods rely on rule-based mappings or shallow feature-based machine learning models, which struggle with simplified characters that have multiple traditional origins, and constructing training data for them is costly. In this study, we propose an unsupervised, adaptive, context-aware conversion model that learns to convert between simplified and traditional Chinese characters under a denoising auto-encoder framework, requiring no labeled data. Our model includes a Latent Generative Adversarial Encoder that maps vectors into a latent space with a generative adversarial network, which adds noise as an inevitable side effect, and a Context-aware Semantic Reconstruction Decoder that restores the original input while considering a broader range of context with a pretrained language model. Additionally, we apply an early-exit mechanism during inference to reduce computational complexity and improve generalization. To test the effectiveness of our model, we construct a high-quality test dataset of simplified-traditional Chinese character text pairs. Experimental results and extensive analysis demonstrate that our model outperforms strong unsupervised baselines and yields better conversion results in one-to-many cases.
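
A minimal sketch of the early-exit idea at inference time, assuming one conversion classifier per encoder layer; the module names and the 0.9 confidence threshold are illustrative assumptions, not the paper's implementation:

```python
# Stop propagating through encoder layers once every character's
# conversion prediction is confident enough.
import torch
import torch.nn as nn

class EarlyExitConverter(nn.Module):
    def __init__(self, vocab_size=8000, d_model=256, n_layers=6, tau=0.9):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            [nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
             for _ in range(n_layers)])
        # One classifier per layer so any depth can emit a prediction.
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_layers)])
        self.tau = tau  # confidence threshold for exiting early

    @torch.no_grad()
    def convert(self, ids: torch.Tensor) -> torch.Tensor:
        h = self.embed(ids)
        for layer, head in zip(self.layers, self.heads):
            h = layer(h)
            probs = head(h).softmax(-1)
            conf, pred = probs.max(-1)
            if conf.min() >= self.tau:  # every character is confident
                return pred             # exit without deeper layers
        return pred
```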

2022

Unsupervised Chinese Word Segmentation with BERT Oriented Probing and Transformation
Wei Li | Yuhan Song | Qi Su | Yanqiu Shao
Findings of the Association for Computational Linguistics: ACL 2022

Word segmentation is a fundamental step in understanding the Chinese language. Previous neural approaches to unsupervised Chinese Word Segmentation (CWS) exploit only shallow semantic information, which can miss important context. Large-scale pre-trained language models (PLMs) have achieved great success in many areas because of their ability to capture deep contextual semantic relations. In this paper, we propose to take advantage of the deep semantic information embedded in a PLM (e.g., BERT) in a self-training manner, iteratively probing and transforming the semantic information in the PLM into explicit word segmentation ability. Extensive experimental results show that our proposed approach achieves state-of-the-art F1 scores on two CWS benchmark datasets.
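
A minimal sketch of one way to probe a PLM for segmentation signal, scoring adjacent-character affinity with BERT's contextual vectors and cutting where affinity is low; this is a simplified stand-in for the paper's iterative probe-and-transform loop, and the 0.6 threshold is an assumption:

```python
import torch
from transformers import BertTokenizer, BertModel

tok = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese").eval()

def segment(sent: str, threshold: float = 0.6) -> list[str]:
    enc = tok(sent, return_tensors="pt")
    with torch.no_grad():
        # hidden states for the characters, excluding [CLS]/[SEP]
        h = bert(**enc).last_hidden_state[0, 1:-1]
    # affinity between each pair of adjacent characters
    sims = torch.cosine_similarity(h[:-1], h[1:], dim=-1)
    words, cur = [], sent[0]
    for ch, sim in zip(sent[1:], sims.tolist()):
        if sim >= threshold:
            cur += ch          # high affinity: same word
        else:
            words.append(cur)  # low affinity: word boundary
            cur = ch
    words.append(cur)
    return words

print(segment("我爱北京天安门"))
```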

《二十四史》古代汉语语义依存图库构建(Construction of a Semantic Dependency Graph Bank of Ancient Chinese in the Twenty-Four Histories)
Tian Huang (黄恬) | Yanqiu Shao (邵艳秋) | Wei Li (李炜)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Semantic dependency graphs are a deep semantic analysis method in NLP that captures the semantic relations between words in a sentence. Targeting the characteristics of ancient Chinese, and on the basis of an annotation scheme designed for ancient Chinese semantic dependency graphs, this work takes the Twenty-Four Histories as its corpus source and completes an ancient Chinese semantic dependency graph bank of 3,000 sentences, with an inter-annotator kappa of 78.83%. Through comparison with a modern Chinese semantic dependency graph bank, we report basic statistics of the bank and analyze the semantic characteristics and regularities of ancient Chinese. The statistics show that the semantic distribution of ancient Chinese macroscopically conforms to Zipf's law, and that its event descriptions exhibit strongly historical-narrative and formal stylistic features: biographies of persons are central, peripheral roles such as time and place are described in detail, the narrative language is calm and objective, and modifiers expressing modality, mood, degree, or temporal state are scarce.
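
A small sketch of the kind of rank-frequency check behind the Zipf's-law claim, regressing log frequency on log rank and looking for a slope near -1; the label counts below are placeholders purely to make the snippet runnable:

```python
import math

counts = sorted([1200, 800, 530, 400, 310, 260, 150, 90, 60, 40],
                reverse=True)  # placeholder semantic-label frequencies
xs = [math.log(r) for r in range(1, len(counts) + 1)]
ys = [math.log(c) for c in counts]

# ordinary least-squares slope of log(freq) against log(rank)
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(f"log-log slope = {slope:.2f}  (Zipf-like if close to -1)")
```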

针对古代经典文献的引用查找问题的数据构建与匹配方法(Data Construction and Matching Method for the Task of Ancient Classics Reference Detection)
Wei Li (李炜) | Yanqiu Shao (邵艳秋) | Mengxi Bi (毕梦曦)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

The intellectual systems of ancient Chinese thinkers were often built on creative interpretations of earlier classics, and identifying the quotations embedded in these interpretations is of great value to the study of intellectual history. For large works, however, marking quotations entirely by hand costs enormous time and labor, so an automatic method that assists experts in locating quotations is essential. Advances in natural language processing, represented by pre-trained language models, have improved computers' ability to process text and understand semantics. Accordingly, this paper proposes several unsupervised baseline methods, drawing on expert knowledge or the semantic understanding of deep learning, to automatically locate quotations of early classics in the works of ancient thinkers. To validate the proposed methods and promote the application of NLP in the digital humanities, we take as a case study the quotations of early Confucian classics by the two Cheng brothers (Cheng Hao and Cheng Yi), influential Neo-Confucians of the Song dynasty, and construct and release a corresponding reference detection dataset. Experimental results show that the proposed composite method, based on a pre-trained language model and a contrastive learning objective, can judge the presence of a quotation relation fairly accurately, reaching a ROC-AUC of 87.83 for short-sentence-level detection and 91.02 for paragraph-level detection. Further analysis shows that our method not only helps find quotation relations automatically but also effectively improves experts' efficiency in verifying them. The method has broad application prospects in annotation compilation, text provenance tracing, duplicate-passage detection, quotation statistics, and the construction of indexed text collections.
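
A minimal sketch of PLM-based similarity scoring for reference detection; the model choice, mean pooling, and 0.8 cutoff are assumptions, and the paper's composite method additionally trains with a contrastive objective:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-chinese")
enc = AutoModel.from_pretrained("bert-base-chinese").eval()

def embed(text: str) -> torch.Tensor:
    batch = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        h = enc(**batch).last_hidden_state
    return h.mean(dim=1).squeeze(0)  # mean pooling over tokens

def detect(passage: str, classics: list[str], cutoff: float = 0.8):
    """Rank classic sentences by similarity to a candidate passage."""
    p = embed(passage)
    scored = [(torch.cosine_similarity(p, embed(c), dim=0).item(), c)
              for c in classics]
    return [(s, c) for s, c in sorted(scored, reverse=True) if s >= cutoff]
```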

基于强化学习的古今汉语句子对齐研究(Research on Sentence Alignment of Ancient and Modern Chinese based on Reinforcement Learning)
Kuai Yu (喻快) | Yanqiu Shao (邵艳秋) | Wei Li (李炜)
Proceedings of the 21st Chinese National Conference on Computational Linguistics

Supervised machine translation based on deep learning has achieved good results, but training requires a large amount of high-quality aligned parallel data. For ancient-to-modern Chinese translation, high-quality parallel corpora are scarce, while coarsely aligned chapter- and paragraph-level data are relatively easy to obtain, so corpus alignment is well worth studying. In traditional research on sentence alignment of bilingual parallel corpora, methods build a composite criterion over surface information such as length, vocabulary, and shared characters to measure the similarity between sentence pairs. Although such methods perform well on one-to-one alignment, their ability to match sentence semantics is limited, and they perform poorly on many-to-many alignment patterns. In this paper we propose to exploit rapidly developing pre-trained language models, with their strong semantic representation ability, to incorporate bilingual semantic information. Because a pre-trained language model alone captures only relatively local information, we further propose a reinforcement-learning training objective based on a dynamic programming algorithm to integrate paragraph-level global information, and we train without supervision. Experimental results show that the model trained with our method outperforms the previous best baseline, with especially large gains on the many-to-many alignment patterns that traditional models handle poorly.
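
A minimal sketch of the dynamic-programming backbone for monotone alignment with 1-1, 1-2, and 2-1 links; in the paper this DP is folded into a reinforcement-learning objective, while here it stands alone over a placeholder similarity matrix, and the merge penalty is an assumption:

```python
def align(sim, penalty=0.1):
    """sim[i][j]: similarity of ancient sentence i and modern sentence j."""
    n, m = len(sim), len(sim[0])
    NEG = float("-inf")
    best = [[NEG] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == NEG:
                continue
            moves = []  # (ancient step, modern step, link score)
            if i < n and j < m:
                moves.append((1, 1, sim[i][j]))
            if i < n and j + 1 < m:  # one ancient ~ two modern sentences
                moves.append((1, 2, sim[i][j] + sim[i][j + 1] - penalty))
            if i + 1 < n and j < m:  # two ancient ~ one modern sentence
                moves.append((2, 1, sim[i][j] + sim[i + 1][j] - penalty))
            for di, dj, s in moves:
                if best[i][j] + s > best[i + di][j + dj]:
                    best[i + di][j + dj] = best[i][j] + s
                    back[i + di][j + dj] = (i, j)
    path, cell = [], (n, m)  # walk back from the full alignment
    while back[cell[0]][cell[1]] is not None:
        prev = back[cell[0]][cell[1]]
        path.append((prev, cell))  # each step covers one aligned group
        cell = prev
    return list(reversed(path))

sim = [[0.9, 0.2, 0.1],
       [0.1, 0.6, 0.5]]   # placeholder PLM similarities
print(align(sim))          # [((0, 0), (1, 1)), ((1, 1), (2, 3))]
```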

2021

基于数据选择和局部伪标注的跨领域语义依存分析研究(Cross-domain Semantic Dependency Parsing Based on Data Selection and Pseudo Partial Annotation)
Dazhan Mao (毛达展) | Kuai Yu (喻快) | Yanqiu Shao (邵艳秋)
Proceedings of the 20th Chinese National Conference on Computational Linguistics

For semantic dependency parsing to become practical, the ability to transfer a model from a single domain to other domains is crucial. In recent years adversarial learning has achieved good results on this domain adaptation task, but it makes inefficient use of unlabeled target-domain data. This paper adopts self-training, a semi-supervised learning method, to fully exploit the potential of unlabeled data and compensate for the shortcomings of adversarial learning. Because traditional self-training is inefficient and performs poorly, we experiment with a reinforcement-learning data selector for the cross-domain semantic dependency parsing task and propose a partial pseudo-annotation labeling strategy. Experimental results show that our proposed model outperforms the baseline.
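
A minimal sketch of the partial pseudo-annotation step, keeping only high-confidence arcs from a predicted parse as training signal; the parser interface and 0.95 threshold are hypothetical, and the paper additionally learns the data selector with reinforcement learning:

```python
def partial_pseudo_annotate(parser, sentences, threshold=0.95):
    """Return (sentence, trusted_arcs) pairs for self-training.

    parser.parse_with_scores is a hypothetical interface yielding
    (head, dep, label, prob) tuples for one sentence.
    """
    selected = []
    for sent in sentences:
        arcs = parser.parse_with_scores(sent)
        # partial annotation: trust only the confident arcs,
        # leaving the rest of the sentence unlabeled
        trusted = [(h, d, l) for h, d, l, p in arcs if p >= threshold]
        if trusted:  # skip sentences with nothing reliable
            selected.append((sent, trusted))
    return selected
```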

2020

半监督跨领域语义依存分析技术研究(Semi-supervised Domain Adaptation for Semantic Dependency Parsing)
Dazhan Mao (毛达展) | Huayong Li (李华勇) | Yanqiu Shao (邵艳秋)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

In recent years deep learning has brought great progress to semantic dependency parsing, but annotating semantic dependency data is very expensive, and a parser that performs well in a single domain degrades sharply when transferred to other domains. Solving the domain adaptation problem is therefore necessary for practical use. This paper proposes a new adversarial-learning-based domain adaptation model for dependency parsing: a shared dual-encoder structure trained with adversarial learning, augmented with a domain-private auxiliary task and an orthogonality constraint. We also investigate the effectiveness and performance of various pre-trained models on the cross-domain dependency parsing task.
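
A minimal sketch of two common ingredients of such a shared/private dual-encoder design: gradient reversal feeding an adversarial domain classifier to make the shared encoder domain-invariant, and an orthogonality penalty keeping private features disjoint from shared ones. All sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad):
        return -grad  # flip gradients: the encoder fights the classifier

def orthogonality_loss(shared: torch.Tensor, private: torch.Tensor):
    # penalize overlap between shared and private feature subspaces
    return (shared.T @ private).pow(2).sum()

shared_h = torch.randn(32, 128, requires_grad=True)   # shared encoder output
private_h = torch.randn(32, 128, requires_grad=True)  # private encoder output
domain_clf = nn.Linear(128, 2)
domain_logits = domain_clf(GradReverse.apply(shared_h))
ortho = orthogonality_loss(shared_h, private_h)
# in training, these terms would be added to the parsing loss
```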

Semantic-aware Chinese Zero Pronoun Resolution with Pre-trained Semantic Dependency Parser
Lanqiu Zhang | Zizhuo Shen | Yanqiu Shao
Proceedings of the 19th Chinese National Conference on Computational Linguistics

Deep learning-based Chinese zero pronoun resolution models have achieved better performance than traditional machine learning-based models. However, existing work on Chinese zero pronoun resolution has not yet integrated linguistic information well into deep learning-based models. This paper builds on the idea of pre-training and integrates the semantic representations of a pre-trained Chinese semantic dependency graph parser into a Chinese zero pronoun resolution model. Experimental results on the OntoNotes-5.0 dataset show that our model with the pre-trained Chinese semantic dependency parser improves the F-score by 0.4% over our baseline and obtains better results than other deep learning-based Chinese zero pronoun resolution models. In addition, we integrate BERT representations into our model, improving performance by 0.7% over the baseline.
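
A minimal sketch of the feature-fusion idea, concatenating token representations from a pretrained semantic dependency parser with BERT token representations before the resolution layers; dimensions and module names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FusedEncoder(nn.Module):
    def __init__(self, bert_dim=768, sdp_dim=400, out_dim=512):
        super().__init__()
        self.proj = nn.Linear(bert_dim + sdp_dim, out_dim)

    def forward(self, bert_h: torch.Tensor, sdp_h: torch.Tensor):
        # bert_h: [batch, seq, bert_dim] from BERT;
        # sdp_h:  [batch, seq, sdp_dim] from the frozen, pre-trained
        #         semantic dependency parser's encoder
        return torch.tanh(self.proj(torch.cat([bert_h, sdp_h], dim=-1)))
```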

2016

SemEval-2016 Task 9: Chinese Semantic Dependency Parsing
Wanxiang Che | Yanqiu Shao | Ting Liu | Yu Ding
Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016)

2015

Construction of Semantic Collocation Bank Based on Semantic Dependency Parsing
Shijun Liu | Yanqiu Shao | Yu Ding | Lijuan Zheng
Proceedings of the 29th Pacific Asia Conference on Language, Information and Computation: Posters

2014

Jointly or Separately: Which is Better for Parsing Heterogeneous Dependencies?
Meishan Zhang | Wanxiang Che | Yanqiu Shao | Ting Liu
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

2012

SemEval-2012 Task 5: Chinese Semantic Dependency Parsing
Wanxiang Che | Meishan Zhang | Yanqiu Shao | Ting Liu
*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)