Jiachen Zhao


2024

Large Language Models are In-context Teachers for Knowledge Reasoning
Jiachen Zhao | Zonghai Yao | Zhichao Yang | Hong Yu
Findings of the Association for Computational Linguistics: EMNLP 2024

In this work, we study in-context teaching (ICT), where a teacher provides in-context example rationales to teach a student to reason over unseen cases. Human teachers are usually required to craft in-context demonstrations, which are costly and have high variance. We ask whether a large language model (LLM) can serve as a more effective in-context teacher for itself or other LLMs, compared to humans. Inspired by the Encoding Specificity Hypothesis from human episodic memory, we hypothesize that in-context exemplars crafted by the teacher should match the training data of the student. This hypothesis motivates us to propose Self-Explain, where an LLM's self-elicited explanations are used as in-context demonstrations for prompting it, as they are generalized from the model's training examples. Self-Explain is shown to significantly outperform using human-crafted exemplars and other baselines. Furthermore, we reveal that for ICT, rationales from different teacher LLMs or human experts that more resemble the student LLM's self-explanations are better in-context demonstrations. This supports our encoding specificity hypothesis. We then propose Teach-Back that aligns a teacher LLM with the student to enhance the ICT performance. For example, Teach-Back enables a 7B model to teach the much larger GPT-3.5 in context, surpassing human teachers by around 5% in test accuracy on medical question answering.
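
A minimal sketch of the Self-Explain recipe summarized in this abstract (an illustration under assumptions, not the authors' released code). The generate callable stands in for any LLM completion interface, and the prompt templates are assumptions rather than the paper's exact wording.

from typing import Callable, List, Tuple

def self_explain_demos(generate: Callable[[str], str],
                       seed_qas: List[Tuple[str, str]]) -> List[Tuple[str, str, str]]:
    """Elicit the student LLM's own rationales for seed (question, answer) pairs;
    these self-explanations later serve as in-context demonstrations."""
    demos = []
    for question, answer in seed_qas:
        rationale = generate(
            f"Question: {question}\nAnswer: {answer}\n"
            "Explain step by step why this answer is correct."
        )
        demos.append((question, rationale, answer))
    return demos

def answer_with_self_explanations(generate: Callable[[str], str],
                                  demos: List[Tuple[str, str, str]],
                                  test_question: str) -> str:
    """Prompt the same LLM, using its own rationales as the in-context examples."""
    context = "\n\n".join(
        f"Question: {q}\nReasoning: {r}\nAnswer: {a}" for q, r, a in demos
    )
    return generate(f"{context}\n\nQuestion: {test_question}\nReasoning:")

Under the paper's encoding-specificity view, the same pipeline also covers Teach-Back: a teacher LLM's rationales would simply replace the student's own in the demos list, after being aligned to resemble the student's self-explanations.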

Multistage Collaborative Knowledge Distillation from a Large Language Model for Semi-Supervised Sequence Generation
Jiachen Zhao | Wenlong Zhao | Andrew Drozdov | Benjamin Rozonoyer | Md Arafat Sultan | Jay-Yoon Lee | Mohit Iyyer | Andrew McCallum
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We study semi-supervised sequence generation tasks, where labeled examples are too scarce to finetune a model and, meanwhile, few-shot prompted large language models (LLMs) exhibit room for improvement. In this paper, we present the discovery that a student model distilled from a few-shot prompted LLM can commonly generalize better than its teacher to unseen examples on such tasks. We find that the student is able to learn a general pattern from the high-quality pseudolabels produced by the teacher during knowledge distillation (KD), and favorably avoids learning the erroneous patterns in the low-quality pseudolabels. Leveraging this discovery, we propose a new method, Multistage Collaborative Knowledge Distillation from an LLM (MCKD), for these tasks. MCKD first few-shot prompts an LLM to produce pseudolabels for unlabeled data. Then, at each stage of an iterative KD process, a new pair of students is trained on disjoint partitions of the pseudolabeled data and produces new and improved pseudolabels for their unseen partitions. We conduct extensive experiments on four syntactic and semantic parsing datasets and show the effectiveness of MCKD for low-resource semi-supervised sequence generation. On CRAFT biomedical parsing, for example, 3-stage MCKD with 50 labeled examples outperforms an LLM teacher and vanilla KD by 7.5% and 3.7% parsing F1, respectively, and matches the performance of supervised finetuning with 500 labeled examples.
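
A minimal sketch of the multistage cross-partition distillation loop summarized in this abstract (an illustration under assumptions, not the released implementation). The callables pseudolabel_with_llm, train_student, and predict are hypothetical stand-ins for the few-shot prompted LLM teacher, student finetuning, and student inference.

import random
from typing import Callable, Dict, List, Tuple

def mckd(unlabeled_inputs: List[str],
         pseudolabel_with_llm: Callable[[str], str],
         train_student: Callable[[List[Tuple[str, str]]], object],
         predict: Callable[[object, str], str],
         num_stages: int = 3,
         seed: int = 0):
    # Stage 0: the few-shot prompted LLM teacher pseudolabels all unlabeled data.
    pseudolabels: Dict[str, str] = {x: pseudolabel_with_llm(x) for x in unlabeled_inputs}

    rng = random.Random(seed)
    inputs = list(unlabeled_inputs)
    for _ in range(num_stages):
        rng.shuffle(inputs)
        half_a, half_b = inputs[: len(inputs) // 2], inputs[len(inputs) // 2:]

        # Each student in the pair is trained on one partition of the pseudolabeled data...
        student_a = train_student([(x, pseudolabels[x]) for x in half_a])
        student_b = train_student([(x, pseudolabels[x]) for x in half_b])

        # ...and relabels the partition it did not see, refreshing those pseudolabels.
        pseudolabels.update({x: predict(student_a, x) for x in half_b})
        pseudolabels.update({x: predict(student_b, x) for x in half_a})

    # A final student trained on the improved pseudolabels is returned.
    return train_student(list(pseudolabels.items()))

The cross-partition step is the key design choice: because each student only relabels data it was not trained on, its outputs act as held-out predictions, which is what lets pseudolabel quality improve from stage to stage.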