Hanwei Xu, Yujun Chen, Yulun Du, Nan Shao, Wang Yanggang, Haiyu Li, and Zhilin Yang. 2022. ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization. In Findings of the Association for Computational Linguistics: EMNLP 2022, Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang (eds.), pages 4235-4252, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. DOI: 10.18653/v1/2022.findings-emnlp.312. URL: https://aclanthology.org/2022.findings-emnlp.312/