Surgan Jandial
2026
Do GUI Grounders Truly Understand UI Elements?
Surgan Jandial | Yinheng Li | Justin Wagle | Kazuhito Koishida
Findings of the Association for Computational Linguistics: EACL 2026
Graphical User Interface (GUI) grounding is critical for effective GUI agents. Despite recent progress, key challenges remain: 1) existing grounding models and benchmarks are skewed toward web and mobile environments, neglecting desktop interfaces (especially Windows); and 2) grounding capability is assessed using accuracy on a single "best" instruction per UI element. However, users can refer to a UI element in diverse valid ways – via visual attributes, spatial relations, etc. – and a capable grounding model should produce consistent outputs across such variations. Focusing on desktop environments, we introduce the GUI Grounding Sensitivity Benchmark, which investigates model sensitivity to multiple descriptions of the same UI element. We design an automatic pipeline to generate multiple valid instructions per UI element and develop nuanced data validation methods, as even frontier models hallucinate when producing a single instruction. Evaluation of 12 models reveals that they are considerably sensitive and that their performance on existing benchmarks does not reflect their true ability. Building on the insight that a given grounding model struggles more with certain instructions or relations, we introduce the GUI Grounding Diagnosis Agent, which generates challenging instructions using model feedback and iterative refinement. Our agent achieves a high success rate (up to 84%) in generating instructions that fail state-of-the-art GUI grounding models.
2025
On the Fine-Grained Planning Abilities of VLM Web Agents
Surgan Jandial | Yinong Oliver Wang | Andrea Bajcsy | Fernando De la Torre
Findings of the Association for Computational Linguistics: EMNLP 2025
Vision-Language Models (VLMs) have shown promise as web agents, yet their planning – the ability to devise strategies or action sequences to complete tasks – remains understudied. While prior works focus on VLMs' perception and overall success rates (i.e., goal completion), fine-grained investigation of their planning has been overlooked. To address this gap, we examine VLMs' capability to (1) understand temporal relationships within web contexts, and (2) assess plans of actions across diverse scenarios. We design four simple yet effective tests to delve into these nuanced aspects of planning. Our results across nineteen VLMs reveal that these models exhibit limited performance on the aforementioned skills and are not reliable enough to function as web agents. To facilitate future work, we release our planning evaluations and data, providing a foundation for advancing future research in this area.
2024
“Thinking” Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models
Shaz Furniturewala | Surgan Jandial | Abhinav Java | Pragyan Banerjee | Simra Shahid | Sumit Bhatia | Kokil Jaidka
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Existing debiasing techniques are typically training-based or require access to the model's internals and output distributions, so they are inaccessible to end users looking to adapt LLM outputs to their particular needs. In this study, we examine whether structured prompting techniques can offer opportunities for fair text generation. We evaluate a comprehensive, end-user-focused iterative framework for debiasing that applies System 2 thinking processes to prompts to induce logical, reflective, and critical text generation, with single, multi-step, instruction, and role-based variants. By systematically evaluating many LLMs across many datasets and different prompting strategies, we show that the more complex System 2-based Implicative Prompts significantly improve over other techniques, demonstrating lower mean bias in the outputs with competitive performance on the downstream tasks. Our work offers research directions for the design and potential of end-user-focused evaluative frameworks for LLM use.