2024
AutoRG: An Automatic Report Generation Framework for Large and Small Model Collaboration (AutoRG:一种大小模型协同的自动报告生成框架)
Jing Zhang (张京) | Jiangming Shu (舒江明) | Yuxiang Zhang (张宇翔) | Bin Wu (吴斌) | Wei Wang (王巍) | Jian Yu (于剑) | Jitao Sang (桑基韬)
Proceedings of the 23rd Chinese National Conference on Computational Linguistics (Volume 1: Main Conference)
Automatic report generation has significant potential for improving work efficiency and saving human resources. The emergence of large language models has improved report fluency and interpretability. However, existing work still relies on manual effort and lacks flexibility and richness. Moreover, erroneous or redundant outputs from small models, together with the inherent randomness of large models, lead to unstable report quality. This paper proposes AutoRG, an automatic report generation framework based on collaboration between large and small models, which reduces manual intervention and improves report richness through the large model's tool understanding and planning abilities, and improves report stability through information correction and report iteration mechanisms. Taking automatic patent report generation as the application scenario, we comprehensively evaluate AutoRG along multiple dimensions. The results show that the framework has significant advantages in improving the richness and quality stability of generated reports.
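For illustration only, the following is a minimal Python sketch of the collaboration pattern the abstract describes (planning, small-model extraction, correction, and report iteration). Every callable below is a hypothetical placeholder supplied by the caller, not part of AutoRG.

# Hypothetical sketch of a large/small-model collaboration loop: the large model plans
# tool and small-model calls, corrects their raw outputs, drafts the report, and then
# iterates on the draft until its quality stabilises. All callables are placeholders.
def generate_report(topic, plan, run_tool, correct, draft, revise, score,
                    max_iters=3, target=0.8):
    steps = plan(topic)                    # large model: decide which tools / small models to call
    raw = [run_tool(s) for s in steps]     # small models / tools: extract raw information
    facts = [correct(r) for r in raw]      # large model: filter erroneous or redundant outputs
    report = draft(topic, facts)           # large model: compose the initial report
    for _ in range(max_iters):             # report iteration: revise until the score is acceptable
        if score(report) >= target:
            break
        report = revise(report, facts)
    return report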
Noisy Multi-Label Text Classification via Instance-Label Pair Correction
Pengyu Xu | Mingyang Song | Linkaida Liu | Bing Liu | Hongjian Sun | Liping Jing | Jian Yu
Findings of the Association for Computational Linguistics: NAACL 2024
In noisy label learning, instance selection based on small-loss criteria has been proven to be highly effective. However, in the case of noisy multi-label text classification (NMLTC), the presence of noise is not limited to the instance-level but extends to the (instance-label) pair-level. This gives rise to two main challenges. (1) The loss information at the pair-level fails to capture the variations between instances. (2) There are two types of noise at the pair-level: false positives and false negatives. Identifying false negatives from a large pool of negative pairs presents an exceedingly difficult task. To tackle these issues, we propose a novel approach called instance-label pair correction (iLaCo), which aims to address the problem of noisy pair selection and correction in NMLTC tasks. Specifically, we first introduce a holistic selection metric that identifies noisy pairs by simultaneously considering global loss information and instance-specific ranking information. Secondly, we employ a filter guided by label correlation to focus exclusively on negative pairs with label relevance. This filter significantly reduces the difficulty of identifying false negatives. Experimental analysis indicates that our framework effectively corrects noisy pairs in NMLTC datasets, leading to a significant improvement in model performance.
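As a rough illustration of the selection idea in this abstract, here is a minimal NumPy sketch (not the authors' code) that flags noisy pairs by combining global loss information with instance-specific rank information, and restricts candidate false negatives to label-correlated negative pairs. The thresholds and the correlation matrix are illustrative assumptions.

# Illustrative sketch, assuming per-pair losses and a precomputed label-correlation matrix.
import numpy as np

def select_noisy_pairs(losses, labels, label_corr, global_q=0.9, rank_q=0.9, corr_thresh=0.3):
    """losses: (N, L) per-pair loss; labels: (N, L) observed 0/1 labels;
    label_corr: (L, L) label co-occurrence / correlation matrix (assumed input)."""
    # Global criterion: pairs whose loss is large relative to the whole dataset.
    global_flag = losses > np.quantile(losses, global_q)
    # Instance-specific criterion: pairs whose loss ranks high within their own instance.
    ranks = losses.argsort(axis=1).argsort(axis=1) / losses.shape[1]
    rank_flag = ranks > rank_q
    candidate = global_flag & rank_flag
    # Observed positive pairs flagged here are treated as possible false positives.
    noisy_pos = candidate & (labels == 1)
    # Negative pairs are filtered by label correlation: only negatives related to an
    # observed positive label are considered possible false negatives.
    relevance = (labels @ label_corr) > corr_thresh
    noisy_neg = candidate & (labels == 0) & relevance
    return noisy_pos, noisy_neg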
Enhancing Multi-Label Text Classification under Label-Dependent Noise: A Label-Specific Denoising Framework
Pengyu Xu | Liping Jing | Jian Yu
Findings of the Association for Computational Linguistics: EMNLP 2024
Recent advancements in noisy multi-label text classification have primarily relied on the class-conditional noise (CCN) assumption, which assumes that each label independently undergoes label flipping to generate noisy labels. However, in real-world scenarios, noisy labels often exhibit dependencies with true labels. In this study, we validate through hypothesis testing that real-world datasets are unlikely to adhere to the CCN assumption, indicating that label noise is dependent on the labels. To address this, we introduce a label-specific denoising framework designed to counteract label-dependent noise. The framework initially presents a holistic selection metric that evaluates noisy labels by concurrently considering loss information, ranking information, and feature centroids. Subsequently, it identifies and corrects noisy labels individually for each label category in a fine-grained manner. Extensive experiments on benchmark datasets demonstrate the effectiveness of our method under both synthetic and real-world noise conditions, significantly improving performance over existing state-of-the-art models.
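To make the holistic, label-specific selection idea concrete, the following is an illustrative Python sketch assuming a simple weighted sum of per-label loss, loss rank, and distance to the label's feature centroid. It is not the paper's implementation; the weights and quantile threshold are placeholders.

# Illustrative per-label noise flagging: loss + rank + distance to the label's feature centroid.
import numpy as np

def per_label_noisy_flags(losses, features, labels, weights=(1.0, 1.0, 1.0), quantile=0.9):
    """losses: (N, L) per-label loss; features: (N, D) instance features; labels: (N, L) observed 0/1."""
    n, num_labels = losses.shape
    flags = np.zeros((n, num_labels), dtype=bool)
    for c in range(num_labels):                       # fine-grained: handle each label category separately
        pos = labels[:, c] == 1
        if not pos.any():
            continue
        centroid = features[pos].mean(axis=0)         # feature centroid of label c
        dist = np.linalg.norm(features - centroid, axis=1)
        rank = losses[:, c].argsort().argsort() / n   # rank of each instance's loss for label c
        score = (weights[0] * losses[:, c]            # holistic metric: loss + rank + centroid distance
                 + weights[1] * rank
                 + weights[2] * dist)
        flags[:, c] = score > np.quantile(score, quantile)  # flag high-scoring (instance, label) entries as noisy
    return flags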
2022
FGraDA: A Dataset and Benchmark for Fine-Grained Domain Adaptation in Machine Translation
Wenhao Zhu | Shujian Huang | Tong Pu | Pingxuan Huang | Xu Zhang | Jian Yu | Wei Chen | Yanfeng Wang | Jiajun Chen
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Previous research on adapting a general neural machine translation (NMT) model to a specific domain usually neglects the diversity of translation within the same domain, which is a core problem for domain adaptation in real-world scenarios. One representative of such challenging scenarios is deploying a translation system for a conference with a specific topic, e.g., global warming or coronavirus, where resources are usually extremely limited due to the tight schedule. To motivate wider investigation of such a scenario, we present a real-world fine-grained domain adaptation task in machine translation (FGraDA). The FGraDA dataset consists of Chinese-English translation tasks for four sub-domains of information technology: autonomous vehicles, AI education, real-time networks, and smartphones. Each sub-domain is equipped with a development set and a test set for evaluation purposes. To be closer to reality, FGraDA does not provide any in-domain bilingual training data but instead offers bilingual dictionaries and a wiki knowledge base, which can be obtained more easily within a short time. We benchmark the fine-grained domain adaptation task and present in-depth analyses showing that challenging problems remain in further improving performance with heterogeneous resources.