Maitreya Patel


2024

Precision or Recall? An Analysis of Image Captions for Training Text-to-Image Generation Model
Sheng Cheng | Maitreya Patel | Yezhou Yang
Findings of the Association for Computational Linguistics: EMNLP 2024

Despite advancements in text-to-image models, generating images that precisely align with textual descriptions remains challenging due to misalignment in training data. In this paper, we analyze the critical role of caption precision and recall in text-to-image model training. Our analysis of human-annotated captions shows that both precision and recall are important for text-image alignment, but precision has a more significant impact. Leveraging these insights, we utilize Large Vision Language Models to generate synthetic captions for training. Models trained with these synthetic captions show similar behavior to those trained on human-annotated captions, underscoring the potential of synthetic data in text-to-image training.
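
As a rough illustration of the two quantities the abstract contrasts: caption precision can be viewed as the fraction of concepts a caption mentions that are actually present in the image, and recall as the fraction of the image's concepts the caption covers. The sketch below is a minimal, hypothetical formulation over concept sets; the function names and the concept-extraction step are assumptions for illustration, not the paper's implementation.

    # Minimal sketch of caption precision/recall over concept sets.
    # How concepts are extracted (e.g., noun-phrase chunking) is left
    # abstract here; this is NOT the paper's actual pipeline.

    def caption_precision_recall(caption_concepts: set[str],
                                 image_concepts: set[str]) -> tuple[float, float]:
        """Precision: share of captioned concepts truly in the image.
        Recall: share of the image's concepts the caption mentions."""
        if not caption_concepts or not image_concepts:
            return 0.0, 0.0
        hits = caption_concepts & image_concepts
        precision = len(hits) / len(caption_concepts)
        recall = len(hits) / len(image_concepts)
        return precision, recall

    # Example: a caption naming {"dog", "frisbee", "car"} for an image
    # whose annotated concepts are {"dog", "frisbee", "park"}.
    p, r = caption_precision_recall({"dog", "frisbee", "car"},
                                    {"dog", "frisbee", "park"})
    print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.67, recall=0.67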

2022

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Yizhong Wang | Swaroop Mishra | Pegah Alipoormolabashi | Yeganeh Kordi | Amirreza Mirzaei | Atharva Naik | Arjun Ashok | Arut Selvan Dhanasekaran | Anjana Arunkumar | David Stap | Eshaan Pathak | Giannis Karamanolakis | Haizhi Lai | Ishan Purohit | Ishani Mondal | Jacob Anderson | Kirby Kuznia | Krima Doshi | Kuntal Kumar Pal | Maitreya Patel | Mehrad Moradshahi | Mihir Parmar | Mirali Purohit | Neeraj Varshney | Phani Rohitha Kaza | Pulkit Verma | Ravsehaj Singh Puri | Rushang Karia | Savan Doshi | Shailaja Keyur Sampat | Siddhartha Mishra | Sujan Reddy A | Sumanta Patro | Tanay Dixit | Xudong Shen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce Super-NaturalInstructions, a benchmark of 1,616 diverse NLP tasks and their expert-written instructions. Our collection covers 76 distinct task types, including but not limited to classification, extraction, infilling, sequence tagging, text rewriting, and text composition. This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions—training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones. Furthermore, we build Tk-Instruct, a transformer model trained to follow a variety of in-context instructions (plain language task definitions or k-shot examples). Our experiments show that Tk-Instruct outperforms existing instruction-following models such as InstructGPT by over 9% on our benchmark despite being an order of magnitude smaller. We further analyze generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances per task, and model sizes. We hope our dataset and model facilitate future progress towards more general-purpose NLP models.
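
For readers who want to try this instruction-following setup, the sketch below formats a task in the "Definition + example" prompt style described above and runs it through a seq2seq model via Hugging Face Transformers. The checkpoint id and the exact prompt template are assumptions based on the publicly released Tk-Instruct checkpoints, not something this abstract specifies; substitute any seq2seq model you have locally.

    # Sketch: querying a Tk-Instruct-style model with a plain-language task
    # definition plus one positive example. The checkpoint id below is an
    # assumption; swap in any available seq2seq checkpoint.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    model_name = "allenai/tk-instruct-3b-def-pos"  # assumed public checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    prompt = (
        "Definition: Classify the sentiment of the sentence as Positive or Negative.\n"
        "Positive Example 1 -\n"
        "Input: I loved this movie.\n"
        "Output: Positive\n"
        "Now complete the following example -\n"
        "Input: The plot was dull and the acting was worse.\n"
        "Output:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=8)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))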

CRIPP-VQA: Counterfactual Reasoning about Implicit Physical Properties via Video Question Answering
Maitreya Patel | Tejas Gokhale | Chitta Baral | Yezhou Yang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Videos often capture objects, their visible properties, their motion, and the interactions between different objects. Objects also have physical properties such as mass, which the imaging pipeline is unable to directly capture. However, these properties can be estimated by utilizing cues from relative object motion and the dynamics introduced by collisions. In this paper, we introduce CRIPP-VQA, a new video question answering dataset for reasoning about the implicit physical properties of objects in a scene. CRIPP-VQA contains videos of objects in motion, annotated with questions that involve counterfactual reasoning about the effect of actions, questions about planning in order to reach a goal, and descriptive questions about visible properties of objects. The CRIPP-VQA test set enables evaluation under several out-of-distribution settings – videos containing objects whose masses, coefficients of friction, and initial velocities are not observed in the training distribution. Our experiments reveal a surprising and significant performance gap between answering questions about implicit properties (the focus of this paper) and explicit properties of objects (the focus of prior work).
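
The "cues from relative object motion and the dynamics introduced by collisions" mentioned above have a simple physical basis: in a collision, conservation of momentum ties the unobservable mass ratio to the observable velocity changes. The sketch below is an illustrative toy calculation under idealized assumptions (1-D motion, negligible friction during impact); it is not part of the CRIPP-VQA pipeline.

    # Toy illustration: inferring a relative (implicit) mass from tracked
    # velocities before and after a 1-D collision, via momentum conservation:
    #   m1*(v1 - v1_after) = m2*(v2_after - v2)  =>  m2/m1 = dv1/dv2

    def mass_ratio(v1_before, v1_after, v2_before, v2_after):
        """Return m2/m1 from observed velocities (idealized, frictionless)."""
        dv1 = v1_before - v1_after   # momentum lost by object 1 (per unit m1)
        dv2 = v2_after - v2_before   # momentum gained by object 2 (per unit m2)
        if dv2 == 0:
            raise ValueError("object 2's velocity did not change; no collision?")
        return dv1 / dv2

    # Object 1 slows from 2.0 to 0.5 m/s while object 2 speeds up from
    # 0.0 to 1.0 m/s, so object 2 must be 1.5x as massive as object 1.
    print(mass_ratio(2.0, 0.5, 0.0, 1.0))  # 1.5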