Michal Shlapentokh-Rothman


2025

Visual Program Distillation with Template-Based Augmentation
Michal Shlapentokh-Rothman | Yu-Xiong Wang | Derek Hoiem
Findings of the Association for Computational Linguistics: EMNLP 2025

Adapting visual programming, in which large language models (LLMs) are prompted to generate executable code for visual tasks such as visual question answering (VQA), to specialized tasks or domains remains challenging due to high annotation and inference costs. We propose a low-cost visual program distillation method that can be applied to models with at most 1 billion parameters and requires no human-generated program annotations. We achieve this through synthetic data augmentation based on decoupling programs into higher-level skills, called templates, and their corresponding arguments. Experimental results show that, with a relatively small amount of question/answer data, small language models can generate high-quality specialized visual programs with the added benefit of much faster inference.
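
The template/argument decoupling can be pictured with a small sketch. The snippet below is not the paper's implementation; the counting template, the find/count primitives, and the helper names are illustrative assumptions about how a program template might be re-instantiated with new arguments to produce synthetic question/program pairs.

```python
# Minimal sketch (assumptions, not the authors' code): a visual program is
# split into a reusable template (the higher-level skill) and its
# question-specific arguments, so new synthetic programs can be produced by
# re-instantiating the template with new arguments.

from string import Template

# Hypothetical template for a counting-style VQA program; find/count are
# placeholder primitives, not a specific visual programming API.
COUNT_TEMPLATE = Template(
    "boxes = find(image, '$object')\n"
    "answer = count(boxes)"
)

def instantiate(template: Template, **arguments: str) -> str:
    """Fill a program template with concrete arguments."""
    return template.substitute(**arguments)

def augment(template: Template, argument_sets: list) -> list:
    """Create synthetic (question, program) pairs by swapping arguments."""
    return [
        (f"How many {args['object']}s are there?", instantiate(template, **args))
        for args in argument_sets
    ]

if __name__ == "__main__":
    for question, program in augment(COUNT_TEMPLATE,
                                     [{"object": "dog"}, {"object": "chair"}]):
        print(question)
        print(program)
        print()
```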

2024

WebWISE: Unlocking Web Interface Control for LLMs via Sequential Exploration
Heyi Tao | Sethuraman T V | Michal Shlapentokh-Rothman | Tanmay Gupta | Heng Ji | Derek Hoiem
Findings of the Association for Computational Linguistics: NAACL 2024

This paper investigates using Large Language Models (LLMs) to automatically perform web software tasks with click, scroll, and text input operations. Previous approaches, such as reinforcement learning (RL) or imitation learning, are inefficient to train and are task-specific. Our method uses filtered Document Object Model (DOM) elements as observations and performs tasks step by step, sequentially generating small programs based on the current observations. We use in-context learning, either from a single manually provided example or from an automatically generated example based on a successful zero-shot trial. We evaluate the proposed method on the MiniWob++ benchmark. With only one in-context example, our WebWISE method using gpt-3.5-turbo achieves similar or better performance than other methods that require many demonstrations or trials.
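
The step-by-step control loop described above can be sketched as follows. This is not the WebWISE code; the filter_dom, llm_generate, and execute helpers and the completion convention are hypothetical placeholders that only illustrate observing filtered DOM elements and sequentially generating small programs.

```python
# Minimal sketch (assumed helper names, not the paper's implementation) of a
# sequential loop: observe filtered DOM elements, ask an LLM for a small
# program over click/scroll/type operations, execute it, and repeat.

from typing import Callable, List

def filter_dom(raw_dom: List[dict]) -> List[dict]:
    """Keep only elements likely to matter for the task (visible, interactive)."""
    return [el for el in raw_dom if el.get("visible") and el.get("interactive")]

def webwise_step_loop(
    task: str,
    get_dom: Callable[[], List[dict]],
    llm_generate: Callable[[str], str],   # prompt -> small program text
    execute: Callable[[str], None],       # runs click/scroll/type operations
    max_steps: int = 10,
) -> None:
    """Sequentially generate and run small programs based on current observations."""
    for step in range(max_steps):
        observation = filter_dom(get_dom())
        prompt = (
            f"Task: {task}\n"
            f"Step {step}: visible elements = {observation}\n"
            "Write a short program using click/scroll/type to make progress."
        )
        program = llm_generate(prompt)
        if not program.strip():
            break  # assumed convention: an empty program signals completion
        execute(program)
```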