Ziyan Yang
2024
PropTest: Automatic Property Testing for Improved Visual Programming
Jaywon Koo | Ziyan Yang | Paola Cascante-Bonilla | Baishakhi Ray | Vicente Ordonez
Findings of the Association for Computational Linguistics: EMNLP 2024
Visual Programming has recently emerged as an alternative to end-to-end black-box visual reasoning models. This type of method leverages Large Language Models (LLMs) to generate the source code for an executable computer program that solves a given problem. This strategy has the advantage of offering an interpretable reasoning path and does not require finetuning a model with task-specific data. We propose PropTest, a general strategy that improves visual programming by further using an LLM to generate code that tests for visual properties in an initial round of proposed solutions. Our method generates tests for data-type consistency, output syntax, and semantic properties. PropTest achieves results comparable to state-of-the-art methods while using publicly available LLMs. We demonstrate this across several benchmarks on visual question answering and referring expression comprehension. In particular, PropTest improves on ViperGPT, obtaining 46.1% accuracy (+6.0%) on GQA using Llama3-8B and 59.5% (+8.1%) on RefCOCO+ using CodeLlama-34B.
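The kinds of property tests described in the abstract can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the function names, the example question, and the specific checks are assumptions made here to show what data-type, syntax, and semantic property tests might look like for a proposed answer.

```python
# Hypothetical sketch of PropTest-style property tests (names and checks
# are illustrative, not taken from the paper's code). For a question like
# "What color is the cup?", an LLM could generate checks that a candidate
# answer must pass before it is accepted.

def property_tests_for_color_question(answer):
    """Illustrative tests for the answer to 'What color is the cup?'"""
    # Data-type consistency: the answer should be a string.
    assert isinstance(answer, str), "answer must be a string"
    # Output syntax: a short lowercase phrase, not a full sentence.
    assert answer == answer.strip().lower(), "answer must be lowercase"
    assert len(answer.split()) <= 2, "answer must be at most two words"
    # Semantic property: the answer should name a plausible color.
    colors = {"red", "blue", "green", "yellow", "black", "white", "brown"}
    assert any(c in answer for c in colors), "answer should name a color"


def run_property_tests(candidate_answers):
    """Return the first candidate passing all tests, mirroring the idea
    of filtering an initial round of proposed solutions."""
    for answer in candidate_answers:
        try:
            property_tests_for_color_question(answer)
            return answer
        except AssertionError:
            continue
    return None  # no candidate satisfied the generated properties


print(run_property_tests([42, "The cup is blue.", "blue"]))  # -> blue
```

Here the first candidate fails the data-type check, the second fails the syntax checks, and only the bare answer "blue" satisfies all three properties.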
2020
Using Visual Feature Space as a Pivot Across Languages
Ziyan Yang | Leticia Pinto-Alva | Franck Dernoncourt | Vicente Ordonez
Findings of the Association for Computational Linguistics: EMNLP 2020
Our work aims to leverage a visual feature space to pass information across languages. We show that models trained to generate textual captions in more than one language, conditioned on an input image, can leverage their jointly trained feature space during inference to pivot across languages. In particular, we demonstrate improved quality of a caption generated from an input image by leveraging a caption in a second language. More importantly, we show that even without conditioning on any visual input, the model has implicitly learned to perform, to some extent, machine translation from one language to another through the shared visual feature space. We show results on German-English and Japanese-English language pairs that pave the way for using the visual world to learn a common representation for language.
Co-authors
- Vicente Ordonez 2
- Jaywon Koo 1
- Paola Cascante-Bonilla 1
- Baishakhi Ray 1
- Leticia Pinto-Alva 1