Zeyu Wang
2025
Conditional Semantic Textual Similarity via Conditional Contrastive Learning
Xinyue Liu | Zeyang Qin | Zeyu Wang | Wenxin Liang | Linlin Zong | Bo Xu
Proceedings of the 31st International Conference on Computational Linguistics
Conditional semantic textual similarity (C-STS) assesses the similarity between pairs of sentence representations under different conditions. Current methods suffer from an over-estimation issue with positive and negative samples: the similarity within positive samples is excessively high, while that within negative samples is excessively low. In this paper, we focus on the C-STS task and develop a conditional contrastive learning framework that constructs positive and negative samples from two perspectives, achieving two primary objectives: (1) adaptively selecting the optimization direction for positive and negative samples to solve the over-estimation problem, and (2) fully balancing the effects of hard and false negative samples. We validate the proposed method with five models based on bi-encoder and tri-encoder architectures; the results show that our method achieves state-of-the-art performance. The code is available at https://github.com/qinzeyang0919/CCL.
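As background for the contrastive objective the abstract refers to, the following is a minimal sketch of a standard InfoNCE-style contrastive loss over cosine similarities. It illustrates generic contrastive learning only, not the conditional variant or the adaptive positive/negative construction proposed in the paper; all names and the temperature value are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.05):
    """Generic InfoNCE contrastive loss over cosine similarities.

    Pulls the anchor toward the positive embedding and pushes it away
    from the negatives. Illustrative only; not the paper's method.
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Similarity of the anchor to the positive, then to each negative.
    sims = [cos(anchor, positive)] + [cos(anchor, n) for n in negatives]
    logits = np.array(sims) / temperature
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # negative log-probability of the positive

# Toy example: a near-duplicate positive and random negatives.
rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.1 * rng.normal(size=8)
negatives = [rng.normal(size=8) for _ in range(4)]
loss = info_nce_loss(anchor, positive, negatives)
```

The over-estimation issue the abstract describes arises because such a loss, applied naively, keeps pushing positive-pair similarity toward 1 and negative-pair similarity toward the minimum regardless of the ground-truth similarity level; the paper's framework instead selects the optimization direction adaptively.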
System Report for CCL25-Eval Task 10: Chinese Hate Speech Detection via Dynamic Cue-Augmented Prompting and Multi-Stage Progressive Optimization
LuRuan LuRuan | ZhaiBo ZhaiBo | Lei Zhang | Lie Bao | Zeyu Wang | Feng Wei | Chenzi Wang
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
With the rapid spread of social media, user-generated content has grown exponentially, which has also fueled the spread of hate speech. Effective hate speech detection has therefore become a key challenge in natural language processing research. To advance Chinese hate speech detection, this paper proposes a novel fine-tuning framework for large language models that combines dynamic cue-augmented prompting with multi-stage progressive optimization. The proposed method decomposes the complex, fine-grained hate speech recognition task into two complementary subtasks: hate tendency classification and hate information extraction. Two specialized training strategies are adopted: dynamic cue-augmented supervised fine-tuning (DCA-SFT) optimizes the model's classification performance, while dynamic cue-augmented reinforcement learning (DCA-RL) improves its information extraction ability. Specifically, in the DCA-SFT stage, discriminative classification is introduced and multi-label one-hot (Multi-Hot) encoding is used as the output representation to improve multi-class classification accuracy. In the DCA-RL stage, chain-of-thought (CoT) knowledge from a closed-source large language model performing hate information extraction is transferred to a smaller model via knowledge distillation, and a reinforcement fine-tuning strategy with rule-based rewards is introduced to strengthen the smaller model's logical reasoning on the extraction task. Experimental results demonstrate the effectiveness of the method, which ranked second on the preliminary leaderboard of CCL25-Eval Task 10 with an F1 score of 0.3864 and third on the final leaderboard with an F1 score of 0.3591.
2024
HiGen: Hierarchy-Aware Sequence Generation for Hierarchical Text Classification
Vidit Jain | Mukund Rungta | Yuchen Zhuang | Yue Yu | Zeyu Wang | Mu Gao | Jeffrey Skolnick | Chao Zhang
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Hierarchical text classification (HTC) is a complex subtask under multi-label text classification, characterized by a hierarchical label taxonomy and data imbalance. The best-performing models aim to learn a static representation by combining document and hierarchical label information. However, the relevance of document sections can vary based on the hierarchy level, necessitating a dynamic document representation. To address this, we propose HiGen, a text-generation-based framework utilizing language models to encode dynamic text representations. We introduce a level-guided loss function to capture the relationship between text and label name semantics. Our approach incorporates a task-specific pretraining strategy, adapting the language model to in-domain knowledge and significantly enhancing performance for classes with limited examples. Furthermore, we present a new and valuable dataset called ENZYME, designed for HTC, which comprises articles from PubMed with the goal of predicting Enzyme Commission (EC) numbers. Through extensive experiments on the ENZYME dataset and the widely recognized WOS and NYT datasets, our methodology demonstrates superior performance, surpassing existing approaches while efficiently handling data and mitigating class imbalance. We release our code and dataset here: https://github.com/viditjain99/HiGen.
CausalBench: A Comprehensive Benchmark for Evaluating Causal Reasoning Capabilities of Large Language Models
Zeyu Wang
Proceedings of the 10th SIGHAN Workshop on Chinese Language Processing (SIGHAN-10)
Causal reasoning, a core aspect of human cognition, is essential for advancing large language models (LLMs) towards artificial general intelligence (AGI) and reducing their propensity for generating hallucinations. However, existing datasets for evaluating causal reasoning in LLMs are limited by narrow domain coverage and a focus on cause-to-effect reasoning through textual problems, which does not comprehensively assess whether LLMs truly grasp causal relationships or merely guess correct answers. To address these shortcomings, we introduce a novel benchmark that spans textual, mathematical, and coding problem domains. Each problem is crafted to probe causal understanding from four perspectives: cause-to-effect, effect-to-cause, cause-to-effect with intervention, and effect-to-cause with intervention. This multi-dimensional evaluation method ensures that LLMs must exhibit a genuine understanding of causal structures by correctly answering questions across all four dimensions, mitigating the possibility of correct responses by chance. Furthermore, our benchmark explores the relationship between an LLM’s causal reasoning performance and its tendency to produce hallucinations. We present evaluations of state-of-the-art LLMs using our benchmark, providing valuable insights into their current causal reasoning capabilities across diverse domains. The dataset is publicly available for download at https://huggingface.co/datasets/CCLV/CausalBench
2022
Detecting Urgency in Multilingual Medical SMS in Kenya
Narshion Ngao | Zeyu Wang | Lawrence Nderu | Tobias Mwalili | Tal August | Keshet Ronen
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing: Student Research Workshop
Access to mobile phones in many low- and middle-income countries has increased exponentially over the last 20 years, providing an opportunity to connect patients with healthcare interventions through mobile phones (known as mobile health). A barrier to large-scale implementation of interactive mobile health interventions is the human effort needed to manage participant messages. In this study, we explore the use of natural language processing to improve healthcare workers’ management of messages from pregnant and postpartum women in Kenya. Using multilingual, low-resource language text messages from the Mobile solutions for Women and Children’s health (Mobile WACh NEO) study, we developed models to assess urgency of incoming messages. We evaluated models using a novel approach that focuses on clinical usefulness in either triaging or prioritizing messages. Our best-performing models did not reach the threshold for clinical usefulness we set, but have the potential to improve nurse workflow and responsiveness to urgent messages.
Whose Language Counts as High Quality? Measuring Language Ideologies in Text Data Selection
Suchin Gururangan | Dallas Card | Sarah Dreier | Emily Gade | Leroy Wang | Zeyu Wang | Luke Zettlemoyer | Noah A. Smith
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Language models increasingly rely on massive web crawls for diverse text data. However, these sources are rife with undesirable content. As such, resources like Wikipedia, books, and news often serve as anchors for automatically selecting web text most suitable for language modeling, a process typically referred to as quality filtering. Using a new dataset of U.S. high school newspaper articles—written by students from across the country—we investigate whose language is preferred by the quality filter used for GPT-3. We find that newspapers from larger schools, located in wealthier, educated, and urban zones (ZIP codes) are more likely to be classified as high quality. We also show that this quality measurement is unaligned with other sensible metrics, such as factuality or literary acclaim. We argue that privileging any corpus as high quality entails a language ideology, and more care is needed to construct training corpora for language models, with better transparency and justification for the inclusion or exclusion of various texts.
Co-authors
- Tal August 1
- Lie Bao 1
- Dallas Card 1
- Sarah Dreier 1
- Emily Gade 1
- Mu Gao 1
- Suchin Gururangan 1
- Vidit Jain 1
- Wenxin Liang 1
- Xinyue Liu 1
- LuRuan LuRuan 1
- Tobias Mwalili 1
- Lawrence Nderu 1
- Narshion Ngao 1
- Zeyang Qin 1
- Keshet Ronen 1
- Mukund Rungta 1
- Jeffrey Skolnick 1
- Noah A. Smith 1
- Leroy Wang 1
- Chenzi Wang 1
- Feng Wei 1
- Bo Xu 1
- Yue Yu 1
- Luke Zettlemoyer 1
- ZhaiBo ZhaiBo 1
- Chao Zhang 1
- Lei Zhang 1
- Yuchen Zhuang 1
- Linlin Zong 1