Jaywon Koo
2024
PropTest: Automatic Property Testing for Improved Visual Programming
Jaywon Koo | Ziyan Yang | Paola Cascante-Bonilla | Baishakhi Ray | Vicente Ordonez
Findings of the Association for Computational Linguistics: EMNLP 2024
Visual Programming has recently emerged as an alternative to end-to-end black-box visual reasoning models. Methods of this type leverage Large Language Models (LLMs) to generate the source code of an executable program that solves a given problem, which offers an interpretable reasoning path and requires no finetuning on task-specific data. We propose PropTest, a general strategy that improves visual programming by additionally using an LLM to generate code that tests visual properties of an initial round of proposed solutions. Our method generates tests for data-type consistency, output syntax, and semantic properties. PropTest achieves results comparable to state-of-the-art methods while using only publicly available LLMs, as demonstrated across benchmarks on visual question answering and referring expression comprehension. In particular, PropTest improves on ViperGPT, reaching 46.1% accuracy (+6.0%) on GQA with Llama3-8B and 59.5% (+8.1%) on RefCOCO+ with CodeLlama-34B.
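A minimal sketch of the loop the abstract describes, with stubbed LLM calls: one prompt proposes a program, another proposes property tests over its output, and candidates that fail are regenerated. The function names (llm_generate_program, llm_generate_tests) and the example checks are hypothetical illustrations, not PropTest's actual prompts or API.

```python
# Sketch of property-test-guided visual programming (assumptions noted below).

from typing import Callable, List

def llm_generate_program(question: str) -> str:
    # Stub: in PropTest an LLM (e.g., Llama3-8B) is prompted to emit
    # executable visual-reasoning code for the question.
    return "def solve(image):\n    return 'red'"

def llm_generate_tests(question: str) -> List[Callable[[object], bool]]:
    # Stub: the LLM is also prompted to emit checks on the program's
    # output; these three checks are illustrative, not the paper's.
    return [
        lambda out: isinstance(out, str),        # data-type consistency
        lambda out: out == out.strip().lower(),  # output syntax
        lambda out: len(out) > 0,                # weak semantic property
    ]

def run_with_property_tests(question: str, image, max_rounds: int = 3):
    tests = llm_generate_tests(question)
    output = None
    for _ in range(max_rounds):
        code = llm_generate_program(question)
        namespace: dict = {}
        exec(code, namespace)                    # execute generated program
        output = namespace["solve"](image)
        if all(test(output) for test in tests):  # accept only if all tests pass
            return output
    return output                                # fall back to the last attempt

print(run_with_property_tests("What color is the car?", image=None))
```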
Multimodal Multi-loss Fusion Network for Sentiment Analysis
Zehui Wu | Ziwei Gong | Jaywon Koo | Julia Hirschberg
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
This paper investigates the optimal selection and fusion of feature encoders across multiple modalities, combining them in a single neural network to improve sentiment detection. We compare different fusion methods and examine the impact of multi-loss training within the multimodal fusion network, surfacing surprisingly important findings about subnet performance. We also find that integrating context significantly enhances model performance. Our best model achieves state-of-the-art performance on three datasets (CMU-MOSI, CMU-MOSEI, and CH-SIMS). These results suggest a roadmap toward optimized feature selection and fusion for enhancing sentiment detection in neural networks.
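A minimal PyTorch sketch of the multi-loss idea from the abstract: each modality has its own encoder and prediction head (subnet), and the training loss combines the fused prediction's loss with per-subnet losses. The encoder choices, dimensions, MSE criterion, and 0.5 subnet-loss weights are illustrative assumptions, not the paper's configuration.

```python
# Sketch of multi-loss training over a two-modality fusion network.

import torch
import torch.nn as nn

class MultiLossFusionNet(nn.Module):
    def __init__(self, text_dim=768, audio_dim=74, hidden=128):
        super().__init__()
        self.text_enc = nn.Linear(text_dim, hidden)   # stand-in encoders
        self.audio_enc = nn.Linear(audio_dim, hidden)
        self.text_head = nn.Linear(hidden, 1)         # per-modality subnets
        self.audio_head = nn.Linear(hidden, 1)
        self.fused_head = nn.Linear(2 * hidden, 1)    # fusion head

    def forward(self, text, audio):
        t = torch.relu(self.text_enc(text))
        a = torch.relu(self.audio_enc(audio))
        fused = torch.cat([t, a], dim=-1)             # simple concat fusion
        return self.fused_head(fused), self.text_head(t), self.audio_head(a)

model = MultiLossFusionNet()
criterion = nn.MSELoss()  # sentiment-as-regression, as in CMU-MOSI/MOSEI
text = torch.randn(8, 768)
audio = torch.randn(8, 74)
target = torch.randn(8, 1)

fused_pred, text_pred, audio_pred = model(text, audio)
# Multi-loss: supervise the fused output and each subnet jointly;
# the 0.5 weights on subnet losses are an assumed hyperparameter.
loss = (criterion(fused_pred, target)
        + 0.5 * criterion(text_pred, target)
        + 0.5 * criterion(audio_pred, target))
loss.backward()
```

Supervising each subnet directly, rather than only the fused output, is one way to expose the per-modality performance effects the abstract highlights, since each encoder receives its own gradient signal.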