Chang Wang


2024

LI4: Label-Infused Iterative Information Interacting Based Fact Verification in Question-answering Dialogue
Xiaocheng Zhang | Chang Wang | Guoping Zhao | Xiaohong Su
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)

Fact verification is a pivotal application in the effort to combat the spread of disinformation, a concern that has recently garnered considerable attention. However, previous studies on fact verification, particularly those focused on question-answering dialogue, have exhibited limitations such as failing to fully exploit question structure and ignoring relevant label information during verification. In this paper, we introduce Label-Infused Iterative Information Interacting (LI4), a novel approach to fact verification in question-answering dialogue. LI4 consists of two carefully designed components: the Iterative Information Refining and Filtering Module (IIRF) and the Fact Label Embedding Module (FLEM). The IIRF uses an Interactive Gating Mechanism to iteratively filter out noise from the question and evidence while concurrently refining the claim information. The FLEM strengthens the model's understanding of labels by injecting label knowledge. We evaluate LI4 on HEALTHVER, FAVIQ, and COLLOQUIAL, and the experimental results confirm that it achieves remarkable progress, establishing a new state-of-the-art performance.
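
As an illustration of the kind of gated iterative interaction the abstract describes, the following minimal PyTorch sketch alternately gates question and evidence information into a claim representation. The module names, dimensions, and update rule are assumptions made for clarity here, not the authors' implementation of IIRF or FLEM.

```python
# Illustrative sketch only: a gated iterative interaction between claim,
# question, and evidence vectors, loosely in the spirit of an iterative
# filter-and-refine loop. All names and dimensions are assumptions.
import torch
import torch.nn as nn


class GatedInteraction(nn.Module):
    """One refinement step: gate how much of a context vector flows into the claim."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.gate = nn.Linear(2 * hidden_size, hidden_size)
        self.transform = nn.Linear(2 * hidden_size, hidden_size)

    def forward(self, claim: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        joint = torch.cat([claim, context], dim=-1)
        g = torch.sigmoid(self.gate(joint))          # how much context to admit
        update = torch.tanh(self.transform(joint))   # candidate refined claim
        return g * update + (1.0 - g) * claim        # gated residual update


class IterativeRefiner(nn.Module):
    """Alternately interact the claim with question and evidence for several rounds."""

    def __init__(self, hidden_size: int, num_rounds: int = 3):
        super().__init__()
        self.question_gate = GatedInteraction(hidden_size)
        self.evidence_gate = GatedInteraction(hidden_size)
        self.num_rounds = num_rounds

    def forward(self, claim, question, evidence):
        for _ in range(self.num_rounds):
            claim = self.question_gate(claim, question)
            claim = self.evidence_gate(claim, evidence)
        return claim


if __name__ == "__main__":
    refiner = IterativeRefiner(hidden_size=256)
    claim, question, evidence = (torch.randn(4, 256) for _ in range(3))
    print(refiner(claim, question, evidence).shape)  # torch.Size([4, 256])
```

The sigmoid gate decides how much of each context vector is admitted at every round, which is one common way to realize an iterative noise-filtering and claim-refinement loop.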

WebCiteS: Attributed Query-Focused Summarization on Chinese Web Search Results with Citations
Haolin Deng | Chang Wang | Li Xin | Dezhang Yuan | Junlang Zhan | Tian Zhou | Jin Ma | Jun Gao | Ruifeng Xu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Enhancing attribution in large language models (LLMs) is a crucial task. One feasible approach is to enable LLMs to cite the external sources that support their generated content. However, existing datasets and evaluation methods in this domain still exhibit notable limitations. In this work, we formulate the task of attributed query-focused summarization (AQFS) and present WebCiteS, a Chinese dataset featuring 7k human-annotated summaries with citations. WebCiteS derives from real-world user queries and web search results, offering a valuable resource for model training and evaluation. Prior work on attribution evaluation does not differentiate between groundedness errors and citation errors, and it also falls short in automatically verifying sentences that draw partial support from multiple sources. We tackle these issues by developing detailed metrics and enabling the automatic evaluator to decompose sentences into sub-claims for fine-grained verification. Our comprehensive evaluation of both open-source and proprietary models on WebCiteS highlights the challenge LLMs face in correctly citing sources, underscoring the need for further improvement. The dataset and code will be open-sourced to facilitate further research in this crucial field.
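
To make the evaluation idea concrete, here is a small, self-contained Python sketch of how an automatic evaluator might separate groundedness errors from citation errors after decomposing sentences into sub-claims. The data shapes, metric names, and the toy entailment check are placeholders, not the WebCiteS evaluator.

```python
# Illustrative sketch only: separate groundedness errors (sub-claim unsupported
# by any retrieved source) from citation errors (sub-claim supported somewhere,
# but not by the sources the sentence actually cites).
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Sentence:
    sub_claims: List[str]   # sentence decomposed into atomic sub-claims
    citations: List[int]    # indices of sources the sentence cites


def evaluate(sentences: List[Sentence],
             sources: List[str],
             entails: Callable[[str, str], bool]) -> Dict[str, float]:
    """Count sub-claims grounded in any source vs. supported by their citations."""
    total = grounded = correctly_cited = 0
    for sent in sentences:
        for claim in sent.sub_claims:
            total += 1
            if any(entails(src, claim) for src in sources):
                grounded += 1                      # groundedness: some source supports it
                if any(entails(sources[i], claim) for i in sent.citations):
                    correctly_cited += 1           # citation: a cited source supports it
    return {
        "groundedness": grounded / total if total else 0.0,
        "citation_precision": correctly_cited / grounded if grounded else 0.0,
    }


if __name__ == "__main__":
    # Toy "entailment" check: substring match stands in for an NLI model.
    naive_entails = lambda source, claim: claim.lower() in source.lower()
    sources = ["The launch happened in 2019 and was led by the Shanghai team."]
    sentences = [Sentence(sub_claims=["launch happened in 2019"], citations=[0])]
    print(evaluate(sentences, sources, naive_entails))
```

In practice the substring check would be replaced by an NLI or claim-verification model, but the bookkeeping that distinguishes "supported by some source" from "supported by a cited source" stays the same.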

2014

Medical Relation Extraction with Manifold Models
Chang Wang | James Fan
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2013

Distant Supervision for Relation Extraction with an Incomplete Knowledge Base
Bonan Min | Ralph Grishman | Li Wan | Chang Wang | David Gondek
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2011

Relation Extraction with Relation Topics
Chang Wang | James Fan | Aditya Kalyanpur | David Gondek
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing