Manuj Malik
2024
An Empirical Analysis of the Writing Styles of Persona-Assigned LLMs
Manuj Malik | Jing Jiang | Kian Ming A. Chai
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Evaluating LLMs’ Mathematical Reasoning in Financial Document Question Answering
Pragya Srivastava | Manuj Malik | Vivek Gupta | Tanuja Ganu | Dan Roth
Findings of the Association for Computational Linguistics: ACL 2024
Large Language Models (LLMs) excel in natural language understanding, but their capability for complex mathematical reasoning over a hybrid of structured tables and unstructured text remains uncertain. This study explores LLMs’ mathematical reasoning on four financial tabular question-answering datasets: TATQA, FinQA, ConvFinQA, and Multihiertt. Through extensive experiments with various models and prompting techniques, we assess how LLMs adapt to complex tables and mathematical tasks. We focus on sensitivity to table complexity and on performance variation as the number of arithmetic reasoning steps increases. The results provide insights into LLMs’ capabilities and limitations in handling complex mathematical scenarios over semi-structured tables. Finally, we introduce EEDP, a novel prompting technique tailored to semi-structured documents, which matches or outperforms baseline performance while providing a nuanced understanding of LLMs’ abilities.
2022
Controlling for Stereotypes in Multimodal Language Model Evaluation
Manuj Malik | Richard Johansson
Proceedings of the Fifth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP
We propose a methodology and design two benchmark sets for measuring to what extent language-and-vision models use the visual signal in the presence or absence of stereotypes. The first benchmark is designed to test for stereotypical colors of common objects, while the second benchmark considers gender stereotypes. The key idea is to compare predictions when the image conforms to the stereotype to predictions when it does not. Our results show that there is significant variation among multimodal models: the recent Transformer-based FLAVA seems to be more sensitive to the choice of image and less affected by stereotypes than older CNN-based models such as VisualBERT and LXMERT. This effect is more discernible in this type of controlled setting than in traditional evaluations, where we do not know whether the model relied on the stereotype or the visual signal.