Fangkai Jiao


2023

Retrieving Multimodal Information for Augmented Generation: A Survey
Ruochen Zhao | Hailin Chen | Weishi Wang | Fangkai Jiao | Xuan Long Do | Chengwei Qin | Bosheng Ding | Xiaobao Guo | Minzhi Li | Xingxuan Li | Shafiq Joty
Findings of the Association for Computational Linguistics: EMNLP 2023

As Large Language Models (LLMs) become popular, an important trend has emerged of using multimodality to augment LLMs’ generation ability, enabling LLMs to better interact with the world. However, there is no unified understanding of at which stage and how different modalities should be incorporated. In this survey, we review methods that assist and augment generative models by retrieving multimodal knowledge, whose formats range from images, code, and tables to graphs and audio. Such methods offer a promising solution to important concerns such as factuality, reasoning, interpretability, and robustness. By providing an in-depth review, this survey is expected to give scholars a deeper understanding of the methods’ applications and encourage them to adapt existing techniques to the fast-growing field of LLMs.

2022

MERIt: Meta-Path Guided Contrastive Learning for Logical Reasoning
Fangkai Jiao | Yangyang Guo | Xuemeng Song | Liqiang Nie
Findings of the Association for Computational Linguistics: ACL 2022

Logical reasoning is of vital importance to natural language understanding. Previous studies either employ graph-based models to incorporate prior knowledge about logical relations, or introduce symbolic logic into neural models through data augmentation. These methods, however, depend heavily on annotated training data and thus suffer from overfitting and poor generalization due to dataset sparsity. To address these two problems, in this paper, we propose MERIt, a MEta-path guided contrastive learning method for logical ReasonIng of text, which performs self-supervised pre-training on abundant unlabeled text data. Two novel strategies serve as indispensable components of our method: a meta-path-based strategy is devised to discover the logical structure in natural texts, followed by a counterfactual data augmentation strategy that eliminates the information shortcut induced by pre-training. Experimental results on two challenging logical reasoning benchmarks, i.e., ReClor and LogiQA, demonstrate that our method outperforms the SOTA baselines with significant improvements.
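
The abstract describes contrastive pre-training over logically consistent and corrupted text pairs. Below is a minimal sketch of a generic InfoNCE-style contrastive objective that such pairs could be plugged into; the encoder, the meta-path construction, and the counterfactual augmentation from the paper are not reproduced, and all names here are illustrative placeholders rather than the authors’ code.

```python
# Illustrative sketch only: MERIt's meta-path mining and counterfactual
# augmentation are assumed to have already produced positive/negative pairs.
import torch
import torch.nn.functional as F


def contrastive_loss(anchor, positive, negatives, temperature=0.07):
    """InfoNCE-style loss over one anchor, one positive, and K negatives.

    anchor:    (batch, dim) encoding of the original context-option pair
    positive:  (batch, dim) encoding of a logically consistent variant
    negatives: (batch, K, dim) encodings of corrupted / shortcut variants
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_score = (anchor * positive).sum(-1, keepdim=True)            # (batch, 1)
    neg_score = torch.einsum("bd,bkd->bk", anchor, negatives)        # (batch, K)

    logits = torch.cat([pos_score, neg_score], dim=1) / temperature  # (batch, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)           # positive sits at index 0
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    # Toy usage with random features standing in for encoder outputs.
    b, k, d = 4, 7, 768
    loss = contrastive_loss(torch.randn(b, d), torch.randn(b, d), torch.randn(b, k, d))
    print(loss.item())
```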

2021

REPT: Bridging Language Models and Machine Reading Comprehension via Retrieval-Based Pre-training
Fangkai Jiao | Yangyang Guo | Yilin Niu | Feng Ji | Feng-Lin Li | Liqiang Nie
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

A Self-Training Method for Machine Reading Comprehension with Soft Evidence Extraction
Yilin Niu | Fangkai Jiao | Mantong Zhou | Ting Yao | Jingfang Xu | Minlie Huang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

Neural models have achieved great success on machine reading comprehension (MRC), and many of them consist of two components: an evidence extractor and an answer predictor. The former seeks the most relevant information in a reference text, while the latter locates or generates answers from the extracted evidence. Despite their importance for training the evidence extractor, evidence labels are not cheaply accessible, particularly in many non-extractive MRC tasks such as YES/NO question answering and multiple-choice MRC. To address this problem, we present a Self-Training method (STM), which supervises the evidence extractor with auto-generated evidence labels in an iterative process. At each iteration, a base MRC model is trained with gold answers and noisy evidence labels; the trained model then predicts pseudo evidence labels that serve as extra supervision in the next iteration. We evaluate STM on seven datasets over three MRC tasks. Experimental results demonstrate improvements over existing MRC models, and we also analyze how and why such a self-training method works in MRC.
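
The abstract outlines an iterative self-training loop: train on gold answers plus noisy evidence labels, then relabel evidence with the trained model for the next round. The following is a minimal sketch of that loop under toy assumptions; heuristic_evidence, train_mrc_model, predict_evidence, and the data format are hypothetical stand-ins, not the authors’ STM implementation.

```python
# A minimal, runnable sketch of an iterative self-training loop for evidence
# extraction. All helpers below are toy placeholders, not the paper's code.
import random


def heuristic_evidence(examples):
    # Initial noisy labels, e.g. pick an arbitrary sentence as "evidence".
    return {ex["id"]: random.randrange(len(ex["sentences"])) for ex in examples}


def train_mrc_model(examples, evidence_labels):
    # Stand-in for fitting the evidence extractor + answer predictor on
    # gold answers and the current (noisy or pseudo) evidence labels.
    return {"evidence": dict(evidence_labels)}


def predict_evidence(model, examples):
    # Stand-in for the trained model relabeling evidence for the next round.
    return {ex["id"]: model["evidence"][ex["id"]] for ex in examples}


def self_training(examples, num_iterations=3):
    evidence_labels = heuristic_evidence(examples)
    model = None
    for _ in range(num_iterations):
        model = train_mrc_model(examples, evidence_labels)   # gold answers + noisy evidence
        evidence_labels = predict_evidence(model, examples)  # pseudo labels for next round
    return model


if __name__ == "__main__":
    data = [{"id": i, "sentences": ["s1", "s2", "s3"], "answer": "yes"} for i in range(4)]
    self_training(data)
```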