Jiaxin Ge
2024
Natural Language Embedded Programs for Hybrid Language Symbolic Reasoning
Tianhua Zhang | Jiaxin Ge | Hongyin Luo | Yung-Sung Chuang | Mingye Gao | Yuan Gong | Yoon Kim | Xixin Wu | Helen Meng | James Glass
Findings of the Association for Computational Linguistics: NAACL 2024
How can we perform computations over natural language representations to solve tasks that require symbolic and numeric reasoning? We propose natural language embedded programs (NLEP) as a unifying framework for addressing math/symbolic reasoning, natural language understanding, and instruction-following tasks. Our approach prompts a language model to generate full Python programs that define functions over data structures containing natural language representations of structured knowledge. A Python interpreter then executes the generated code and prints the output. Despite using a task-general prompt, we find that this approach can improve upon strong baselines across a range of tasks, including math and symbolic reasoning, text classification, question answering, and instruction following. We further find that the generated programs are interpretable, since they outline the exact reasoning process followed by the program interpreter.
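To make the NLEP pattern concrete, here is a minimal sketch of the generate-then-execute loop the abstract describes. The prompt wording and the `query_language_model` helper are hypothetical stand-ins, not the paper's released prompt or code.

```python
import io
import contextlib

# Illustrative task-general prompt: ask the LM for a complete program that
# stores knowledge in data structures, reasons over it, and prints the answer.
TASK_GENERAL_PROMPT = (
    "Write a complete Python program that solves the task below.\n"
    "First define data structures (lists/dicts) holding natural-language\n"
    "representations of the relevant knowledge, then define functions that\n"
    "reason over them, and finally print the answer.\n\n"
    "Task: {task}\n"
)

def query_language_model(prompt: str) -> str:
    """Hypothetical LM call; replace with your provider's API client."""
    raise NotImplementedError

def solve_with_nlep(task: str) -> str:
    # Step 1: ask the LM for a full Python program.
    program = query_language_model(TASK_GENERAL_PROMPT.format(task=task))
    # Step 2: execute the generated program and capture whatever it prints.
    # (A real system should sandbox this exec call.)
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(program, {"__name__": "__main__"})
    return buffer.getvalue().strip()
```

Because the final answer is produced by running the printed program, the program text itself doubles as a trace of the reasoning, which is the interpretability property the abstract notes.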
2023
From Wrong To Right: A Recursive Approach Towards Vision-Language Explanation
Jiaxin Ge | Sanjay Subramanian | Trevor Darrell | Boyi Li
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Addressing the challenge of adapting pre-trained vision-language models to generate insightful explanations for visual reasoning tasks with limited annotations, we present ReVisE: a Recursive Visual Explanation algorithm. Our method iteratively computes visual features (conditioned on the text input), an answer, and an explanation, improving the explanation quality step by step until the answer converges. We find that this multi-step approach guides the model to correct its own answers and outperforms single-step explanation generation. Furthermore, explanations generated by ReVisE serve as valuable annotations for few-shot self-training. Our approach outperforms previous methods across 10 metrics while using merely 5% of the human-annotated explanations, demonstrating up to a 4.2-point and a 1.3-point increase in BLEU-1 score on the VCR and VQA-X datasets respectively, and underscoring the efficacy and data efficiency of our method.
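A schematic sketch of the recursive loop the abstract describes, assuming hypothetical model interfaces (`encode_visual`, `generate_answer`, `generate_explanation`); this is not ReVisE's released API.

```python
def revise(image, question, model, max_steps: int = 5):
    """Iterate features -> answer -> explanation until the answer converges."""
    explanation = ""
    previous_answer = None
    for _ in range(max_steps):
        # Recompute visual features conditioned on the current text input
        # (the question plus the latest explanation).
        features = model.encode_visual(image, f"{question} {explanation}".strip())
        answer = model.generate_answer(features, question)
        explanation = model.generate_explanation(features, question, answer)
        if answer == previous_answer:  # stop once the answer stabilizes
            break
        previous_answer = answer
    return answer, explanation
```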
Entailment as Robust Self-Learner
Jiaxin Ge | Hongyin Luo | Yoon Kim | James Glass
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Entailment has been recognized as an important metric for evaluating natural language understanding (NLU) models, and recent studies have found that entailment pretraining benefits weakly supervised fine-tuning. In this work, we first design a prompting strategy that formulates a number of different NLU tasks as contextual entailment, which improves the zero-shot adaptation of pretrained entailment models. Second, we observe that self-training entailment-based models on unlabeled data can significantly improve adaptation performance on downstream tasks. To achieve more stable improvement, we propose the Simple Pseudo-Label Editing (SimPLE) algorithm, which yields better pseudo-labeling quality in self-training. We further find that both pretrained entailment-based models and the self-trained models are robust against adversarial evaluation data. Experiments on binary and multi-class classification tasks show that SimPLE leads to more robust self-training results, indicating that self-trained entailment models are more efficient and trustworthy than large language models on language understanding tasks.
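A schematic sketch of entailment-based self-training with pseudo-label editing. The agreement-based filtering step below is a simplified stand-in for the paper's SimPLE procedure, not its exact algorithm, and `model` and its methods are hypothetical.

```python
def self_train(model, labeled_data, unlabeled_texts, rounds=3, votes=5):
    """Self-train an entailment classifier on agreement-filtered pseudo-labels."""
    model.fit(labeled_data)
    for _ in range(rounds):
        pseudo_labeled = []
        for text in unlabeled_texts:
            # Draw several stochastic predictions (e.g. with dropout enabled)
            # and keep the example only if all predictions agree.
            preds = [model.predict_stochastic(text) for _ in range(votes)]
            if len(set(preds)) == 1:
                pseudo_labeled.append((text, preds[0]))
        # Retrain on the gold labels plus the edited pseudo-label set.
        model.fit(labeled_data + pseudo_labeled)
    return model
```

The point of the editing step is stability: by discarding pseudo-labels the model is unsure about, each retraining round sees less label noise than naive self-training would introduce.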