2024
The State of the Art of Large Language Models on Chartered Financial Analyst Exams
Mahmoud Mahfouz | Ethan Callanan | Mathieu Sibue | Antony Papadimitriou | Zhiqiang Ma | Xiaomo Liu | Xiaodan Zhu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: Industry Track
The Chartered Financial Analyst (CFA) program is one of the most widely recognized financial certifications globally. In this work, we test a variety of state-of-the-art large language models (LLMs) on mock CFA exams to provide an overview of their financial analysis capabilities, using the same evaluation standards applied to human professionals. We benchmark five leading proprietary models and eight open-source models on all three levels of the CFA through challenging multiple-choice and essay questions. We find that flagship proprietary models perform relatively well and can solidly pass the Level I and Level II exams, but fail at Level III because of its essay questions. Open-source models generally fall short of estimated passing scores, but still show strong performance considering their size, cost, and availability advantages. We also find that using textbook data helps bridge the gap between open-source and proprietary models to a certain extent, despite reduced gains at CFA Levels II and III. By understanding the current financial analysis abilities of LLMs, we aim to guide practitioners on which models are best suited for enhancing automation in the financial industry.
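A minimal sketch of how the multiple-choice portion of such an evaluation can be scored against an estimated passing threshold. The exam item schema, the `ask_model` callable, and the 60% cutoff are illustrative assumptions, not figures from the paper (the paper itself works with estimated passing scores).

```python
from typing import Callable

def score_mock_exam(exam: list[dict],
                    ask_model: Callable[[str, list[str]], str],
                    passing_threshold: float = 0.60) -> dict:
    """Score a model on multiple-choice items against an answer key.

    Each item is assumed to look like
    {"question": ..., "choices": [...], "answer": "B"}.
    """
    correct = sum(
        ask_model(item["question"], item["choices"]) == item["answer"]
        for item in exam
    )
    accuracy = correct / len(exam)
    return {"accuracy": accuracy, "passed": accuracy >= passing_threshold}
```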
Fine-Tuning Language Models with Differential Privacy through Adaptive Noise Allocation
Xianzhi Li | Ran Zmigrod | Zhiqiang Ma | Xiaomo Liu | Xiaodan Zhu
Findings of the Association for Computational Linguistics: EMNLP 2024
Language models are capable of memorizing detailed patterns and information, leading to a double-edged effect: they achieve impressive modeling performance on downstream tasks with the stored knowledge, but also raise significant privacy concerns. Traditional differential privacy (DP) based training approaches offer robust safeguards by employing a uniform noise distribution across all parameters. However, this overlooks the distinct sensitivities and contributions of individual parameters to privacy protection, and often results in suboptimal models. To address these limitations, we propose ANADP, a novel algorithm that adaptively allocates additive noise based on the importance of model parameters. We demonstrate that ANADP narrows the performance gap between regular fine-tuning and traditional DP fine-tuning on a series of datasets while maintaining the required privacy constraints.
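A minimal sketch of the general idea of importance-weighted noise allocation in a DP-SGD-style update: parameters whose gradients look more important receive proportionally less noise. Using per-tensor gradient magnitude as the importance proxy, and the specific rescaling rule, are assumptions for illustration, not ANADP's exact algorithm.

```python
import torch

def adaptive_noise_step(params, lr=0.01, clip_norm=1.0, base_sigma=1.0, eps=1e-8):
    """One gradient step with importance-weighted Gaussian noise.

    Assumes .backward() has already populated p.grad. Importance is
    computed per parameter tensor (per layer) for brevity; gradient
    magnitude as an importance proxy is an illustrative assumption.
    """
    grads = [p.grad for p in params]
    # Clip the global gradient norm, as in standard DP-SGD.
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = torch.clamp(clip_norm / (total_norm + eps), max=1.0)
    importances = [g.abs().mean() for g in grads]
    mean_imp = sum(importances) / len(importances)
    for p, g, imp in zip(params, grads, importances):
        # Less noise where importance is high, more where it is low.
        sigma = base_sigma * mean_imp / (imp + eps)
        noisy_grad = g * scale + sigma * clip_norm * torch.randn_like(g)
        p.data -= lr * noisy_grad
```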
“What is the value of templates?” Rethinking Document Information Extraction Datasets for LLMs
Ran Zmigrod | Pranav Shetty | Mathieu Sibue | Zhiqiang Ma | Armineh Nourbakhsh | Xiaomo Liu | Manuela Veloso
Findings of the Association for Computational Linguistics: EMNLP 2024
The rise of large language models (LLMs) for visually rich document understanding (VRDU) has kindled a need for prompt-response, document-based datasets. As annotating new datasets from scratch is labor-intensive, the existing literature has generated prompt-response datasets from available resources using simple templates. For the case of key information extraction (KIE), one of the most common VRDU tasks, past work has typically employed the template “What is the value for the key?”. However, given the variety of questions encountered in the wild, simple and uniform templates are insufficient for creating robust models in research and industrial contexts. In this work, we present K2Q, a diverse collection of five datasets converted from KIE to a prompt-response format using a plethora of bespoke templates. The questions in K2Q can span multiple entities and be extractive or boolean. We empirically compare the performance of seven baseline generative models on K2Q with zero-shot prompting. We further compare three of these models when training on K2Q versus training on simpler templates, to motivate the need for our work. We find that creating diverse and intricate KIE questions enhances the performance and robustness of VRDU models. We hope this work encourages future studies on data quality for generative model training.
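A minimal sketch of the underlying conversion step: turning one KIE key-value annotation into diverse prompt-response pairs. The template wordings below are illustrative stand-ins; K2Q's actual templates are bespoke and far more varied.

```python
import random

# Illustrative templates only, not the actual K2Q templates.
EXTRACTIVE_TEMPLATES = [
    "What is the value for the key '{key}'?",
    "Which {key} is listed in this document?",
    "Find the {key} mentioned in the form.",
]
BOOLEAN_TEMPLATE = "Is the {key} in this document equal to '{value}'?"

def kie_to_prompt_response(key: str, value: str) -> list[tuple[str, str]]:
    """Convert one key-value annotation into prompt-response pairs."""
    pairs = [(random.choice(EXTRACTIVE_TEMPLATES).format(key=key), value)]
    # Boolean variant asking about the true value; a distractor variant
    # with answer "No" could be generated the same way.
    pairs.append((BOOLEAN_TEMPLATE.format(key=key, value=value), "Yes"))
    return pairs
```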
Proceedings of the Joint Workshop of the 7th Financial Technology and Natural Language Processing, the 5th Knowledge Discovery from Unstructured Data in Financial Services, and the 4th Workshop on Economics and Natural Language Processing
Chung-Chi Chen | Xiaomo Liu | Udo Hahn | Armineh Nourbakhsh | Zhiqiang Ma | Charese Smiley | Veronique Hoste | Sanjiv Ranjan Das | Manling Li | Mohammad Ghassemi | Hen-Hsen Huang | Hiroya Takamura | Hsin-Hsi Chen
Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams
Ethan Callanan | Amarachi Mbakwe | Antony Papadimitriou | Yulong Pei | Mathieu Sibue | Xiaodan Zhu | Zhiqiang Ma | Xiaomo Liu | Sameena Shah
Proceedings of the Eighth Financial Technology and Natural Language Processing and the 1st Agent AI for Scenario Planning
DocLLM: A Layout-Aware Generative Language Model for Multimodal Document Understanding
Dongsheng Wang | Natraj Raman | Mathieu Sibue | Zhiqiang Ma | Petr Babkin | Simerjot Kaur | Yulong Pei | Armineh Nourbakhsh | Xiaomo Liu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Enterprise documents such as forms, receipts, and reports often carry rich semantics at the intersection of textual and spatial modalities. The visual cues offered by their complex layouts play a crucial role in comprehending these documents effectively. In this paper, we present DocLLM, a lightweight extension to traditional large language models (LLMs) for reasoning over visual documents, taking into account both textual semantics and spatial layout. Our model differs from existing multimodal LLMs by avoiding expensive image encoders and focusing exclusively on bounding-box information to incorporate the spatial layout structure. Specifically, the cross-alignment between the text and spatial modalities is captured by decomposing the attention mechanism of classical transformers into a set of disentangled matrices. Furthermore, we devise a pre-training objective that learns to infill text segments. This approach allows us to address the irregular layouts and heterogeneous content frequently encountered in visual documents. The pre-trained model is fine-tuned using a large-scale instruction dataset covering four core document intelligence tasks. We demonstrate that our solution outperforms state-of-the-art LLMs on 14 out of 16 datasets across all tasks, and generalizes well to 4 out of 5 previously unseen datasets.
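A simplified single-head sketch of the disentangled-attention idea described above: attention scores decompose into a sum of text-to-text, text-to-spatial, and spatial-to-text terms computed from separate projection matrices. The head count, scaling, and learnable mixing weights here are illustrative assumptions, not DocLLM's exact design.

```python
import torch
import torch.nn as nn

class DisentangledAttention(nn.Module):
    """Attention over text with additive spatial (bounding-box) terms."""

    def __init__(self, dim: int):
        super().__init__()
        self.qt, self.kt = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.qs, self.ks = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        # Learnable weights for the cross-modal score terms (assumption).
        self.lambdas = nn.Parameter(torch.ones(2))

    def forward(self, text: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # text:  (batch, seq, dim) token embeddings
        # boxes: (batch, seq, dim) embedded bounding-box features
        d = text.size(-1) ** 0.5
        scores = (
            self.qt(text) @ self.kt(text).transpose(-2, -1)
            + self.lambdas[0] * self.qt(text) @ self.ks(boxes).transpose(-2, -1)
            + self.lambdas[1] * self.qs(boxes) @ self.kt(text).transpose(-2, -1)
        ) / d
        return torch.softmax(scores, dim=-1) @ self.v(text)
```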
2023
Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? A Study on Several Typical Tasks
Xianzhi Li | Samuel Chan | Xiaodan Zhu | Yulong Pei | Zhiqiang Ma | Xiaomo Liu | Sameena Shah
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
The most recent large language models (LLMs), such as ChatGPT and GPT-4, have shown exceptional capabilities as generalist models, achieving state-of-the-art performance on a wide range of NLP tasks with little or no adaptation. How effective are such models in the finance domain? Understanding this basic question would have a significant impact on many downstream financial analytical tasks. In this paper, we conduct empirical studies and provide experimental evidence of their performance on a wide variety of financial text analytical problems, using eight benchmark datasets from five categories of tasks. We report both the strengths and limitations of the current models by comparing them to state-of-the-art fine-tuned approaches and recently released domain-specific pretrained models. We hope our study helps in understanding the capabilities of existing models in the financial domain and facilitates further improvements.
Unsupervised Domain Adaptation using Lexical Transformations and Label Injection for Twitter Data
Akshat Gupta | Xiaomo Liu | Sameena Shah
Proceedings of the 13th Workshop on Computational Approaches to Subjectivity, Sentiment, & Social Media Analysis
Domain adaptation is an important and widely studied problem in natural language processing. A large body of literature tries to solve this problem by adapting models trained on the source domain to the target domain. In this paper, we instead solve this problem from a dataset perspective. We modify the source domain dataset with simple lexical transformations to reduce the domain shift between the source dataset distribution and the target dataset distribution. We find that models trained on the transformed source domain dataset perform significantly better than zero-shot models. Using our proposed transformations to convert standard English to tweets, we reach an unsupervised part-of-speech (POS) tagging accuracy of 92.14% (up from 81.54% zero-shot accuracy), which is only slightly below the supervised performance of 94.45%. We also use our proposed transformations to synthetically generate tweets and augment the Twitter dataset to achieve state-of-the-art performance for POS tagging.
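A minimal sketch of what such lexical transformations can look like when turning standard English into tweet-like text. The specific rules below (lowercasing, contractions, random hashtagging) are illustrative assumptions; the paper's actual rule set may differ.

```python
import random
import re

# Illustrative contraction map only; the paper's rule set may differ.
CONTRACTIONS = {"are not": "aren't", "going to": "gonna", "want to": "wanna"}

def to_tweet_style(sentence: str) -> str:
    """Apply simple lexical transformations to make text tweet-like."""
    s = sentence.lower()
    for full, short in CONTRACTIONS.items():
        s = s.replace(full, short)
    words = s.split()
    # Randomly hashtag one non-initial word, mimicking Twitter usage.
    if len(words) > 3:
        i = random.randrange(1, len(words))
        words[i] = "#" + re.sub(r"\W", "", words[i])
    return " ".join(words)
```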
2022
TweetFinSent: A Dataset of Stock Sentiments on Twitter
Yulong Pei | Amarachi Mbakwe | Akshat Gupta | Salwa Alamir | Hanxuan Lin | Xiaomo Liu | Sameena Shah
Proceedings of the Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP)
Stock sentiment has strong correlations with the stock market, but the traditional sentiment analysis task classifies sentiment according to whether feelings and emotions are good or bad. This definition of sentiment is not an accurate indicator of public opinion about specific stocks. To bridge this gap, we introduce a new task of stock sentiment analysis and present a new dataset for this task named TweetFinSent. In TweetFinSent, tweets are annotated based on whether one gained, or expected to gain, a positive or negative return from a stock. Experiments on TweetFinSent have been conducted with several sentiment analysis models, from lexicon-based to transformer-based. The experimental results show that the TweetFinSent dataset constitutes a challenging problem and that there is ample room for improvement on the stock sentiment analysis task. TweetFinSent is available at https://github.com/jpmcair/tweetfinsent.
AIR-JPMC@SMM4H’22: Classifying Self-Reported Intimate Partner Violence in Tweets with Multiple BERT-based Models
Alec Louis Candidato | Akshat Gupta | Xiaomo Liu | Sameena Shah
Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task
This paper presents our submission for the SMM4H 2022 Shared Task on the classification of self-reported intimate partner violence on Twitter (in English). The goal of this task was to accurately determine whether the contents of a given tweet demonstrated someone reporting their own experience with intimate partner violence. The submitted system is an ensemble of five RoBERTa models, each weighted by its respective F1-score on the validation dataset. This system performed 13% better than the baseline and was the best-performing system overall for this shared task.
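A minimal sketch of an F1-weighted ensemble of classifier outputs. Combining normalized F1 weights over per-model class probabilities is one natural reading of "weighted by their respective F1-scores"; the exact combination rule is an assumption.

```python
import numpy as np

def f1_weighted_ensemble(probabilities: list[np.ndarray],
                         val_f1_scores: list[float]) -> np.ndarray:
    """Combine per-model class probabilities, weighted by validation F1.

    probabilities[i] has shape (n_examples, n_classes) for model i.
    """
    weights = np.asarray(val_f1_scores, dtype=float)
    weights /= weights.sum()  # normalize so the weights sum to one
    combined = sum(w * p for w, p in zip(weights, probabilities))
    return combined.argmax(axis=1)  # predicted class per example
```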
AIR-JPMC@SMM4H’22: Identifying Self-Reported Spanish COVID-19 Symptom Tweets Through Multiple-Model Ensembling
Adrian Garcia Hernandez | Leung Wai Liu | Akshat Gupta | Vineeth Ravi | Saheed O. Obitayo | Xiaomo Liu | Sameena Shah
Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task
We present our response to Task 5 of the Social Media Mining for Health Applications (SMM4H) 2022 competition. We share our approach to classifying whether a Spanish-language tweet about COVID-19 symptoms pertains to the author, to others, or to no one at all. Using a combination of BERT-based models, we were able to achieve results that were higher than the median result of the competition.
AIR-JPMC@SMM4H’22: BERT + Ensembling = Too Cool: Using Multiple BERT Models Together for Various COVID-19 Tweet Identification Tasks
Leung Wai Liu | Akshat Gupta | Saheed Obitayo | Xiaomo Liu | Sameena Shah
Proceedings of The Seventh Workshop on Social Media Mining for Health Applications, Workshop & Shared Task
This paper presents our submission for Tasks 1 and 2 of the Social Media Mining for Health (SMM4H) 2022 Shared Tasks competition. We first describe the background behind each of these tasks, then the various subtasks of Tasks 1 and 2, and finally present the methodology. Through model ensembling, this methodology was able to achieve results higher than the mean and median of the competition for the classification tasks.
2017
funSentiment at SemEval-2017 Task 4: Topic-Based Message Sentiment Classification by Exploiting Word Embeddings, Text Features and Target Contexts
Quanzhi Li | Armineh Nourbakhsh | Xiaomo Liu | Rui Fang | Sameena Shah
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)
This paper describes the approach we used for SemEval-2017 Task 4: Sentiment Analysis in Twitter. Topic-based (target-dependent) sentiment analysis has become attractive and has recently been used in several applications, but it remains a challenging research task. In our approach, we take the left and right context of a target into consideration when generating polarity classification features. We use two types of word embeddings in our classifiers: general word embeddings learned from 200 million tweets, and sentiment-specific word embeddings learned from 10 million tweets using distant supervision. We also incorporate a text feature model in our algorithm. This model produces features based on text negation, the tf-idf weighting scheme, and a Rocchio text classification method. We participated in four subtasks (B, C, D & E for English), all of which concern topic-based message polarity classification. Our team ranked #6 in subtask B; #3 by MAEu and #9 by MAEm in subtask C; #3 by RAE and #6 by KLD in subtask D; and #3 in subtask E.
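A minimal sketch of the target-context idea described above: average the word embeddings of the tokens to the left and right of the target and concatenate them with the target's own embedding. The averaging and concatenation choices are illustrative assumptions, not necessarily the system's exact feature construction.

```python
import numpy as np

def target_context_features(tokens: list[str], target: str,
                            embed: dict[str, np.ndarray],
                            dim: int = 100) -> np.ndarray:
    """Build polarity features from the left/right context of a target.

    Assumes `target` occurs in `tokens` and `embed` maps words to
    dim-dimensional vectors (e.g., tweet-trained embeddings).
    """
    idx = tokens.index(target)

    def avg(span: list[str]) -> np.ndarray:
        vecs = [embed[w] for w in span if w in embed]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

    left, right = avg(tokens[:idx]), avg(tokens[idx + 1:])
    target_vec = embed.get(target, np.zeros(dim))
    return np.concatenate([left, target_vec, right])
```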
funSentiment at SemEval-2017 Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs Using Word Vectors Built from StockTwits and Twitter
Quanzhi Li | Sameena Shah | Armineh Nourbakhsh | Rui Fang | Xiaomo Liu
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)
This paper describes the approach we used for SemEval-2017 Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs. We use three types of word embeddings in our algorithm: word embeddings learned from 200 million tweets, sentiment-specific word embeddings learned from 10 million tweets using distant supervision, and word embeddings learned from 20 million StockTwits messages. In our approach, we also take the left and right context of the target company into consideration when generating polarity prediction features. All the features generated from the different word embeddings and contexts are integrated to train our algorithm.
2016
Witness Identification in Twitter
Rui Fang | Armineh Nourbakhsh | Xiaomo Liu | Sameena Shah | Quanzhi Li
Proceedings of the Fourth International Workshop on Natural Language Processing for Social Media