Chandan Gautam


2024

Class Name Guided Out-of-Scope Intent Classification
Chandan Gautam | Sethupathy Parameswaran | Aditya Kane | Yuan Fang | Savitha Ramasamy | Suresh Sundaram | Sunil Kumar Sahu | Xiaoli Li
Findings of the Association for Computational Linguistics: EMNLP 2024

2022

Refinement Matters: Textual Description Needs to be Refined for Zero-shot Learning
Chandan Gautam | Sethupathy Parameswaran | Vinay Verma | Suresh Sundaram | Savitha Ramasamy
Findings of the Association for Computational Linguistics: EMNLP 2022

Zero-Shot Learning (ZSL) has shown great promise at the intersection of vision and language, and generative methods for ZSL are predominant owing to their efficacy. Moreover, textual descriptions or attributes play a critical role in transferring knowledge from the seen to the unseen classes in ZSL. However, such generative approaches are very costly to train and require the class descriptions of the unseen classes during training. In this work, we propose a non-generative gating-based attribute refinement network for ZSL, which achieves accuracy comparable to generative ZSL methods at a much lower computational cost. The refined attributes are mapped into the visual domain through an attribute embedder, and the whole network is guided by the circle loss and the softmax cross-entropy loss to obtain a robust class embedding. We refer to our approach as the Circle loss guided gating-based Attribute-Refinement Network (CARNet). We perform extensive experiments on five benchmark datasets across three challenging scenarios: Generalized ZSL (GZSL), Continual GZSL (CGZSL), and conventional ZSL. We observe that CARNet outperforms recent non-generative ZSL methods and most generative ZSL methods in all three settings by a significant margin. Our extensive ablation study disentangles the contributions of the various components and justifies their importance. The source code is available at https://github.com/Sethup123/CARNet.
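
To illustrate the gating-based attribute refinement idea described in the abstract, below is a minimal, hypothetical PyTorch sketch. It is not the authors' implementation (see https://github.com/Sethup123/CARNet for the official code); the module names, layer sizes, and toy training step are assumptions. The sketch refines class attributes with a learned sigmoid gate, maps them into the visual feature space with an attribute embedder, scores images against the resulting class embeddings, and shows only the cross-entropy term of the combined circle-loss/cross-entropy objective.

# Hypothetical sketch of a gating-based attribute refinement network,
# loosely following the CARNet description above (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttributeRefiner(nn.Module):
    """Refines class attributes with a learned sigmoid gate (assumed design)."""

    def __init__(self, attr_dim: int, hidden_dim: int = 512):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(attr_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, attr_dim),
            nn.Sigmoid(),
        )

    def forward(self, attrs: torch.Tensor) -> torch.Tensor:
        # Element-wise gating: down-weight noisy attribute dimensions,
        # emphasize discriminative ones.
        return attrs * self.gate(attrs)


class AttributeEmbedder(nn.Module):
    """Maps refined attributes into the visual feature space (assumed design)."""

    def __init__(self, attr_dim: int, visual_dim: int = 2048):
        super().__init__()
        self.proj = nn.Linear(attr_dim, visual_dim)

    def forward(self, refined_attrs: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(refined_attrs), dim=-1)


def classify(visual_feats: torch.Tensor, class_embeddings: torch.Tensor,
             scale: float = 20.0) -> torch.Tensor:
    """Cosine-similarity logits between image features and class embeddings."""
    v = F.normalize(visual_feats, dim=-1)
    return scale * v @ class_embeddings.t()


if __name__ == "__main__":
    num_classes, attr_dim, visual_dim, batch = 50, 85, 2048, 8
    class_attrs = torch.randn(num_classes, attr_dim)   # per-class attribute vectors
    images = torch.randn(batch, visual_dim)            # pre-extracted image features
    labels = torch.randint(0, num_classes, (batch,))

    refiner = AttributeRefiner(attr_dim)
    embedder = AttributeEmbedder(attr_dim, visual_dim)

    class_embeddings = embedder(refiner(class_attrs))
    logits = classify(images, class_embeddings)

    # The paper combines softmax cross-entropy with a circle loss;
    # only the cross-entropy term is shown here for brevity.
    loss = F.cross_entropy(logits, labels)
    loss.backward()
    print(f"toy cross-entropy loss: {loss.item():.4f}")

Because the classifier is just a similarity against attribute-derived class embeddings, extending it to unseen classes only requires passing their attribute vectors through the same refiner and embedder at test time, which is what makes the non-generative setup inexpensive compared with training a feature generator.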