Tianqi Zhang
2024
The Earth is Flat because...: Investigating LLMs’ Belief towards Misinformation via Persuasive Conversation
Rongwu Xu | Brian Lin | Shujian Yang | Tianqi Zhang | Weiyan Shi | Tianwei Zhang | Zhixuan Fang | Wei Xu | Han Qiu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) encapsulate vast amounts of knowledge but remain vulnerable to external misinformation. Existing research has mainly studied this susceptibility in single-turn settings. However, beliefs can change over the course of a multi-turn conversation, especially a persuasive one. Therefore, in this study, we delve into LLMs’ susceptibility to persuasive conversations, particularly on factual questions that they can answer correctly. We first curate the Farm (i.e., Fact to Misinform) dataset, which contains factual questions paired with systematically generated persuasive misinformation. We then develop a testing framework to track LLMs’ belief changes over the course of a persuasive dialogue. Through extensive experiments, we find that LLMs’ correct beliefs about factual knowledge can be easily manipulated by various persuasive strategies.
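To make the abstract's "track LLMs' belief changes" concrete, here is a minimal sketch of a turn-by-turn belief-tracking loop. Everything in it — the `ask_llm` stub, the message format, and the example persuasion turns — is an illustrative assumption, not the paper's actual Farm framework or data format.

```python
# Minimal sketch: re-ask a factual question after each persuasive turn and
# record whether the model still answers correctly. All names are hypothetical.

def ask_llm(history, question):
    """Stub: replace with a real chat-model call that answers `question`
    given the conversation `history`. Hypothetical interface."""
    return "A"  # placeholder answer

def track_belief(question, correct_answer, persuasion_turns):
    """Return a list of booleans: the model's belief status before
    persuasion and after each persuasive turn."""
    history = []
    beliefs = [ask_llm(history, question) == correct_answer]  # initial belief
    for turn in persuasion_turns:
        history.append({"role": "user", "content": turn})  # inject misinformation
        beliefs.append(ask_llm(history, question) == correct_answer)
    return beliefs  # e.g. [True, True, False] marks a belief flip at turn 2

if __name__ == "__main__":
    turns = [
        "Many experts now say the answer is actually B.",     # appeal to authority
        "A recent study found overwhelming evidence for B.",  # fabricated evidence
    ]
    print(track_belief("Is the Earth round? (A) yes (B) no", "A", turns))
```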
2022
An Explainable Toolbox for Evaluating Pre-trained Vision-Language Models
Tiancheng Zhao | Tianqi Zhang | Mingwei Zhu | Haozhan Shen | Kyusong Lee | Xiaopeng Lu | Jianwei Yin
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
We introduce VL-CheckList, a toolbox for evaluating Vision-Language Pretraining (VLP) models, including the preliminary datasets that deepen the image-text matching ability of a VLP model. Most existing VLP works evaluate their systems by comparing fine-tuned downstream-task performance. However, average downstream-task accuracy alone provides little information about the pros and cons of each VLP method. In this paper, we demonstrate how minor input changes in language and vision affect a model’s prediction outputs. We then describe detailed guidelines for using the toolbox and contributing to the community. We present new findings on one representative VLP model as an example analysis. The data and code are available at https://github.com/om-ai-lab/VL-CheckList
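For readers unfamiliar with checklist-style probing, here is a minimal sketch of the perturbation idea the abstract mentions: score an image against its original caption versus a minimally edited one. The `score_image_text` stub and its interface are assumptions for illustration only, not VL-CheckList’s real API; see the linked repository for the actual toolbox.

```python
# Minimal sketch of perturbation-based VLP probing: a robust model should
# score the true caption higher than one with a single element swapped.
# The scoring function below is a hypothetical stub, not the toolbox's API.

def score_image_text(image_path, caption):
    """Stub: replace with a real VLP image-text matching score
    (e.g. cosine similarity of image and text embeddings)."""
    return 0.5  # placeholder score

def probe(image_path, caption, perturbed_caption):
    """Compare scores on the original vs. minimally perturbed caption."""
    orig = score_image_text(image_path, caption)
    pert = score_image_text(image_path, perturbed_caption)
    return {"original": orig, "perturbed": pert, "robust": orig > pert}

if __name__ == "__main__":
    print(probe("dog.jpg",
                "a black dog on a red sofa",
                "a red dog on a black sofa"))  # attribute swap
```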