Jake Zhao
2024
Fast Adaptation via Prompted Data: An Efficient Cross-Domain Fine-tuning Method for Large Language Models
Yiming Zhang
|
Hantao Yang
|
Haobo Wang
|
Jake Zhao
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Large language models (LLMs) have achieved great success in a variety of natural language understanding tasks. However, domain discrepancies between downstream tasks and the pre-training corpora can hinder LLMs from excelling further in vertical applications. In contrast to prior computationally heavy methods, we propose a lightweight solution to further bridge the gap in applying LLMs to diverse downstream tasks: a Fast Adaptation method for LLMs via Prompted Data, FAvPD for short. Notably, FAvPD introduces an additional adaptive tuning procedure in which we integrate downstream text corpora, gold labels, and external knowledge sources, and then envelop them in a highly controllable prompt form. As a simple, easy-to-use, and versatile solution, FAvPD lies at the intersection of knowledge-augmented LLMs, fine-tuning, and adaptation techniques. Extensive experiments show that FAvPD outperforms related prior work in both performance and training efficiency. FAvPD is publicly available at https://github.com/Hyatio/FAvPD.
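To make the "prompted data" idea concrete, below is a minimal sketch of how a downstream example, its gold label, and external knowledge snippets could be enveloped into a single prompt string for an additional adaptive tuning pass. The template wording and the [KNOWLEDGE]/[TEXT]/[LABEL] markers are assumptions for illustration, not the paper's exact format.

```python
# Illustrative sketch of prompted-data construction (field names and
# template markers are assumptions, not FAvPD's exact format).

from dataclasses import dataclass
from typing import List


@dataclass
class DownstreamExample:
    text: str             # downstream task text
    gold_label: str       # gold label for the example
    knowledge: List[str]  # snippets from an external knowledge source


def envelop_into_prompt(ex: DownstreamExample) -> str:
    """Wrap text, gold label, and external knowledge into one prompt
    string that a standard LM tuning loop can consume as adaptation data."""
    knowledge_block = " ".join(f"[KNOWLEDGE] {k}" for k in ex.knowledge)
    return (
        f"{knowledge_block} "
        f"[TEXT] {ex.text} "
        f"[LABEL] The label of this example is {ex.gold_label}."
    )


if __name__ == "__main__":
    ex = DownstreamExample(
        text="The new drug reduced tumor size in 40% of patients.",
        gold_label="positive outcome",
        knowledge=["Tumor size reduction is a common efficacy endpoint."],
    )
    # The resulting string would feed an ordinary LM objective as an
    # additional adaptive tuning step before task fine-tuning.
    print(envelop_into_prompt(ex))
```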
2022
Distill The Image to Nowhere: Inversion Knowledge Distillation for Multimodal Machine Translation
Ru Peng
|
Yawen Zeng
|
Jake Zhao
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Past work on multimodal machine translation (MMT) improves on the bilingual setup by incorporating additional aligned visual information. However, the image-must requirement of multimodal datasets largely hinders MMT's development: it demands aligned triples of [image, source text, target text]. This limitation is especially troublesome at inference time, when an aligned image is not provided, as in the standard NMT setup. Thus, in this work, we introduce IKD-MMT, a novel MMT framework that supports image-free inference via an inversion knowledge distillation scheme. In particular, a multimodal feature generator is trained with a knowledge distillation module, directly generating the multimodal feature from the source text alone. While a few prior works have explored image-free inference for machine translation, their performance has yet to rival image-must translation. In our experiments, we find our method to be the first image-free approach that comprehensively rivals or even surpasses (almost) all image-must frameworks, achieving state-of-the-art results on the widely used Multi30k benchmark. Our code and data are available at: https://github.com/pengr/IKD-mmt/tree/master.
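The core mechanism, a generator that learns to produce a pseudo image feature from the source text so the image branch can be dropped at inference, can be sketched as below. The dimensions, module names, and the MSE distillation objective are assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative sketch of text-to-image-feature distillation (dimensions,
# module names, and the MSE objective are assumptions, not IKD-MMT's
# exact design).

import torch
import torch.nn as nn


class MultimodalFeatureGenerator(nn.Module):
    """Student: maps a source-text representation to a pseudo image
    feature, so no aligned image is required at inference time."""

    def __init__(self, text_dim: int = 512, image_dim: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, image_dim),
        )

    def forward(self, text_repr: torch.Tensor) -> torch.Tensor:
        return self.proj(text_repr)


def distillation_step(generator, text_repr, teacher_image_feat, optimizer):
    """One training step: the generator's output is pulled toward the
    real image feature, which is only available during training."""
    pseudo_image_feat = generator(text_repr)
    loss = nn.functional.mse_loss(pseudo_image_feat, teacher_image_feat)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    gen = MultimodalFeatureGenerator()
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
    # Dummy batch: 8 sentences encoded to 512-d, paired with 2048-d image feats.
    text_repr = torch.randn(8, 512)
    image_feat = torch.randn(8, 2048)
    print(distillation_step(gen, text_repr, image_feat, opt))
    # At inference, gen(text_repr) stands in for the image branch entirely.
```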