Shijing Chen
2025
Leveraging Taxonomy and LLMs for Improved Multimodal Hierarchical Classification
Shijing Chen | Mohamed Reda Bouadjenek | Usman Naseem | Basem Suleiman | Shoaib Jameel | Flora Salim | Hakim Hacid | Imran Razzak
Proceedings of the 31st International Conference on Computational Linguistics
Multi-level Hierarchical Classification (MLHC) tackles the challenge of categorizing items within a complex, multi-layered class structure. However, traditional MLHC classifiers often rely on a backbone model with n independent output layers, which tends to ignore the hierarchical relationships between classes. This oversight can lead to inconsistent predictions that violate the underlying taxonomy. Leveraging Large Language Models (LLMs), we propose a novel taxonomy-embedded transitional, LLM-agnostic framework for multimodal classification. The cornerstone of this advancement is the ability of models to enforce consistency across hierarchical levels. Our evaluations on the MEP-3M dataset, a multi-modal e-commerce product dataset with multiple hierarchical levels, demonstrated a significant performance improvement over conventional LLM structures.
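The abstract does not spell out the consistency mechanism, but the core idea of restricting each level's prediction to the children of the level above can be illustrated with a minimal sketch. The taxonomy, class indices, and `consistent_predict` helper below are hypothetical, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical two-level taxonomy: parent class -> set of valid child classes.
TAXONOMY = {
    0: {0, 1},      # e.g. "Electronics" -> {"Phones", "Laptops"}
    1: {2, 3, 4},   # e.g. "Clothing"    -> {"Shirts", "Pants", "Shoes"}
}

def consistent_predict(parent_logits: np.ndarray, child_logits: np.ndarray):
    """Pick the parent first, then restrict the child prediction to the
    children allowed under that parent, so the pair never violates the taxonomy."""
    parent = int(np.argmax(parent_logits))
    allowed = TAXONOMY[parent]
    # Mask out children that do not belong to the predicted parent.
    masked = np.full_like(child_logits, -np.inf)
    for c in allowed:
        masked[c] = child_logits[c]
    child = int(np.argmax(masked))
    return parent, child

# Toy example: the unconstrained child argmax (class 2) would contradict
# parent 0; masking forces a child consistent with the predicted parent.
parent, child = consistent_predict(
    np.array([2.0, 1.0]),                 # parent scores
    np.array([0.5, 0.9, 1.5, 0.1, 0.3])   # child scores
)
print(parent, child)  # -> 0 1
```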
2023
Debunking Biases in Attention
Shijing Chen | Usman Naseem | Imran Razzak
Proceedings of the 3rd Workshop on Trustworthy Natural Language Processing (TrustNLP 2023)
Despite their remarkable performance in various applications, machine learning (ML) models can discriminate. They may introduce bias into decision-making, negatively impacting individuals and society. Recently, various methods have been developed to mitigate bias while maintaining strong performance. Attention mechanisms are a fundamental component of many state-of-the-art ML models and may affect the fairness of those models; however, how they explicitly influence fairness has yet to be thoroughly explored. In this paper, we investigate how different attention mechanisms affect the fairness of ML models, focusing on models used in Natural Language Processing (NLP). We evaluate the fairness and performance of several models with and without different attention mechanisms on widely used benchmark datasets. Our results indicate that the majority of the attention mechanisms assessed can improve the fairness of Bidirectional Gated Recurrent Unit (BiGRU) and Bidirectional Long Short-Term Memory (BiLSTM) models on all three datasets with respect to religion- and gender-sensitive groups, albeit with varying trade-offs in accuracy. Our findings highlight the possibility that fairness is affected by adopting specific attention mechanisms in machine learning models for certain datasets.
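The abstract does not name the fairness metrics used; as an illustration, a common group-fairness measure such as the true-positive-rate gap between sensitive groups can be computed as in the sketch below. The `tpr_gap` helper and toy data are hypothetical, not taken from the paper.

```python
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Absolute gap in true-positive rate between two sensitive groups
    (a standard group-fairness measure; the paper's exact metrics may differ)."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = []
    for g in (0, 1):
        # TPR for group g: fraction of its positive examples predicted positive.
        mask = (group == g) & (y_true == 1)
        rates.append(y_pred[mask].mean())
    return abs(rates[0] - rates[1])

# Toy example: compare a model's predictions across two sensitive groups.
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1]
group  = [0, 0, 1, 1, 0, 1]
print(tpr_gap(y_true, y_pred, group))  # TPRs of 0.5 vs 1.0 -> gap of 0.5
```

Running such a metric on the same model with and without a given attention mechanism is one way to quantify the fairness/accuracy trade-offs the abstract describes.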
Co-authors
- Usman Naseem 2
- Imran Razzak 2
- Mohamed Reda Bouadjenek 1
- Hakim Hacid 1
- Shoaib Jameel 1
- Basem Suleiman 1
- Flora Salim 1