2024
TreeForm: End-to-end Annotation and Evaluation for Form Document Parsing
Ran Zmigrod | Zhiqiang Ma | Armineh Nourbakhsh | Sameena Shah
Proceedings of The 18th Linguistic Annotation Workshop (LAW-XVIII)
Visually Rich Form Understanding (VRFU) poses a complex research problem due to the highly structured nature yet highly variable style and content of form documents. Current annotation schemes decompose form understanding and omit key hierarchical structure, making development and evaluation of end-to-end models difficult. In this paper, we propose a novel F1 metric to evaluate form parsers and describe a new content-agnostic, tree-based annotation scheme for VRFU: TreeForm. We provide methods to convert previous annotation schemes into TreeForm structures and evaluate TreeForm predictions using a modified version of the normalized tree-edit distance. We present initial baselines for our end-to-end performance metric and the TreeForm edit distance, averaged over the FUNSD and XFUND datasets, of 61.5 and 26.4 respectively. We hope that TreeForm encourages deeper research in annotating, modeling, and evaluating the complexities of form-like documents.
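For intuition, here is a minimal sketch of a normalized tree-edit distance between a predicted and a gold form tree, using the Zhang-Shasha algorithm from the `zss` package. The unit edit costs and the max-size normalization are illustrative assumptions, not TreeForm's exact metric.

```python
# pip install zss  -- Zhang-Shasha tree edit distance
from zss import Node, simple_distance

def tree_size(node):
    """Count the nodes in a zss tree."""
    return 1 + sum(tree_size(c) for c in Node.get_children(node))

def normalized_ted(pred, gold):
    """Tree edit distance scaled to [0, 1] by the larger tree's size.
    NOTE: this normalization is an assumption, not TreeForm's exact choice."""
    dist = simple_distance(pred, gold)  # unit-cost insert/delete/relabel
    return dist / max(tree_size(pred), tree_size(gold))

# Toy form trees: a root with a key-value branch.
gold = Node("form").addkid(Node("key:Name").addkid(Node("value:Jane")))
pred = Node("form").addkid(Node("key:Name").addkid(Node("value:John")))
print(normalized_ted(pred, gold))  # one relabel out of 3 nodes -> ~0.33
```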
Proceedings of the Joint Workshop of the 7th Financial Technology and Natural Language Processing, the 5th Knowledge Discovery from Unstructured Data in Financial Services, and the 4th Workshop on Economics and Natural Language Processing
Chung-Chi Chen | Xiaomo Liu | Udo Hahn | Armineh Nourbakhsh | Zhiqiang Ma | Charese Smiley | Veronique Hoste | Sanjiv Ranjan Das | Manling Li | Mohammad Ghassemi | Hen-Hsen Huang | Hiroya Takamura | Hsin-Hsi Chen
Can GPT models be Financial Analysts? An Evaluation of ChatGPT and GPT-4 on mock CFA Exams
Ethan Callanan | Amarachi Mbakwe | Antony Papadimitriou | Yulong Pei | Mathieu Sibue | Xiaodan Zhu | Zhiqiang Ma | Xiaomo Liu | Sameena Shah
Proceedings of the Eighth Financial Technology and Natural Language Processing and the 1st Agent AI for Scenario Planning
DocLLM: A Layout-Aware Generative Language Model for Multimodal Document Understanding
Dongsheng Wang | Natraj Raman | Mathieu Sibue | Zhiqiang Ma | Petr Babkin | Simerjot Kaur | Yulong Pei | Armineh Nourbakhsh | Xiaomo Liu
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Enterprise documents such as forms, receipts, reports, and other such records often carry rich semantics at the intersection of textual and spatial modalities. The visual cues offered by their complex layouts play a crucial role in comprehending these documents effectively. In this paper, we present DocLLM, a lightweight extension to traditional large language models (LLMs) for reasoning over visual documents, taking into account both textual semantics and spatial layout. Our model differs from existing multimodal LLMs by avoiding expensive image encoders and focusing exclusively on bounding box information to incorporate the spatial layout structure. Specifically, the cross-alignment between text and spatial modalities is captured by decomposing the attention mechanism in classical transformers into a set of disentangled matrices. Furthermore, we devise a pre-training objective that learns to infill text segments. This approach allows us to address irregular layouts and heterogeneous content frequently encountered in visual documents. The pre-trained model is fine-tuned using a large-scale instruction dataset covering four core document intelligence tasks. We demonstrate that our solution outperforms SotA LLMs on 14 out of 16 datasets across all tasks, and generalizes well to 4 out of 5 previously unseen datasets.
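The disentangled decomposition can be pictured as four score matrices (text-to-text, text-to-spatial, spatial-to-text, spatial-to-spatial) combined before the softmax. Below is a minimal single-head numpy sketch; the lambda mixing weights and random projections are illustrative assumptions, not the released model's parameterization.

```python
import numpy as np

def disentangled_scores(Ht, Hs, Wq_t, Wk_t, Wq_s, Wk_s,
                        lam_ts=1.0, lam_st=1.0, lam_ss=1.0):
    """Single-head attention scores from disentangled text/spatial terms.
    Ht: (n, d) text hidden states; Hs: (n, d) bounding-box embeddings.
    The lambda weights are hyperparameters (values here are assumptions)."""
    Qt, Kt = Ht @ Wq_t, Ht @ Wk_t          # text queries/keys
    Qs, Ks = Hs @ Wq_s, Hs @ Wk_s          # spatial queries/keys
    d = Qt.shape[-1]
    scores = (Qt @ Kt.T                    # text    -> text
              + lam_ts * (Qt @ Ks.T)       # text    -> spatial
              + lam_st * (Qs @ Kt.T)       # spatial -> text
              + lam_ss * (Qs @ Ks.T))      # spatial -> spatial
    return scores / np.sqrt(d)

n, d = 4, 8
rng = np.random.default_rng(0)
Ht, Hs = rng.normal(size=(n, d)), rng.normal(size=(n, d))
W = lambda: rng.normal(size=(d, d)) / np.sqrt(d)
A = disentangled_scores(Ht, Hs, W(), W(), W(), W())
attn = np.exp(A) / np.exp(A).sum(-1, keepdims=True)  # row-wise softmax
```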
2023
Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? A Study on Several Typical Tasks
Xianzhi Li | Samuel Chan | Xiaodan Zhu | Yulong Pei | Zhiqiang Ma | Xiaomo Liu | Sameena Shah
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track
The most recent large language models (LLMs), such as ChatGPT and GPT-4, have shown exceptional capabilities as generalist models, achieving state-of-the-art performance on a wide range of NLP tasks with little or no adaptation. How effective are such models in the finance domain? Understanding this basic question would have a significant impact on many downstream financial analytical tasks. In this paper, we conduct empirical studies and provide experimental evidence of their performance on a wide variety of financial text analytical problems, using eight benchmark datasets from five categories of tasks. We report both the strengths and limitations of the current models by comparing them to state-of-the-art fine-tuned approaches and recently released domain-specific pretrained models. We hope our study can help in understanding the capabilities of existing models in the financial domain and facilitate further improvements.
2022
ConvFinQA: Exploring the Chain of Numerical Reasoning in Conversational Finance Question Answering
Zhiyu Chen | Shiyang Li | Charese Smiley | Zhiqiang Ma | Sameena Shah | William Yang Wang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
With recent advances in large pre-trained language models, researchers have achieved record performances on NLP tasks that mostly focus on language pattern matching. The community is experiencing a shift of the challenge from modeling language to imitating the complex reasoning abilities of human beings. In this work, we investigate the application domain of finance, which involves real-world, complex numerical reasoning. We propose a new large-scale dataset, ConvFinQA, that aims to study the chain of numerical reasoning in conversational question answering. Our dataset poses a great challenge in modeling long-range, complex numerical reasoning paths in real-world conversations. We conduct comprehensive experiments and analyses with both neural symbolic methods and prompting-based methods to provide insights into the reasoning mechanisms of these two paradigms. We believe our new dataset should serve as a valuable resource to push forward the exploration of real-world, complex reasoning tasks as the next research focus. Our dataset and code are publicly available at https://github.com/czyssrs/ConvFinQA.
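Datasets in this family typically express the numerical reasoning chain as a small program of arithmetic steps whose later steps may reference earlier results. Here is a hedged Python sketch of executing such a program; the grammar is simplified and the operator set is an assumption for illustration.

```python
# Sketch of executing a FinQA/ConvFinQA-style reasoning program such as
# "subtract(5, 3), divide(#0, 4)", where "#k" refers to step k's result.
import re

OPS = {
    "add": lambda a, b: a + b,
    "subtract": lambda a, b: a - b,
    "multiply": lambda a, b: a * b,
    "divide": lambda a, b: a / b,
}

def run_program(program: str) -> float:
    results = []
    for op, raw_args in re.findall(r"(\w+)\(([^)]*)\)", program):
        args = []
        for tok in raw_args.split(","):
            tok = tok.strip()
            # "#k" is a back-reference to an earlier step's result
            args.append(results[int(tok[1:])] if tok.startswith("#") else float(tok))
        results.append(OPS[op](*args))
    return results[-1]

print(run_program("subtract(5, 3), divide(#0, 4)"))  # (5 - 3) / 4 = 0.5
```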
A Word Feature Encoding Method for Transformer-Based Mongolian Speech Recognition (面向 Transformer 模型的蒙古语语音识别词特征编码方法)
Xiaoxu Zhang (张晓旭) | Zhiqiang Ma (马志强) | Zhiqiang Liu (刘志强) | Caijilahu Bao (宝财吉拉呼)
Proceedings of the 21st Chinese National Conference on Computational Linguistics
In Mongolian speech recognition, the Transformer model cannot learn the correspondence between speech and Mongolian words that contain control characters, which leaves the model poorly adapted to Mongolian. We propose a Mongolian word encoding method for the Transformer model that mixes Mongolian letter features with word features: by incorporating letter information, the Transformer model can distinguish Mongolian words containing control characters and learn the correspondence between Mongolian words and speech. On the IMUT-MC dataset, we build a Transformer model and conduct ablation and comparison experiments on the word feature encoding method. The ablation results show that the word feature encoding method reduces HWER, WER, and SER by 23.4%, 6.9%, and 2.6%, respectively; the comparison results show that it outperforms all other methods, reaching 11.8% HWER and 19.8% WER.
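One way to picture the mixed letter/word encoding: each word's input vector combines a word embedding with a pooled embedding of its letters, so words that differ only in control characters remain distinguishable. Below is a minimal numpy sketch with toy vocabularies; the concatenation and mean-pooling choices are assumptions, not the paper's exact design.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
word_vocab = {"<unk>": 0, "cat": 1, "cart": 2}                  # toy word list
char_vocab = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}

E_word = rng.normal(size=(len(word_vocab), d))  # word embedding table
E_char = rng.normal(size=(len(char_vocab), d))  # letter embedding table

def encode(word: str) -> np.ndarray:
    """Concatenate a word embedding with mean-pooled letter embeddings,
    so near-identical surface forms still map to distinct inputs."""
    w = E_word[word_vocab.get(word, word_vocab["<unk>"])]
    letters = [E_char[char_vocab[c]] for c in word if c in char_vocab]
    c = np.mean(letters, axis=0) if letters else np.zeros(d)
    return np.concatenate([w, c])   # (2*d,) vector fed to the Transformer

x = encode("cart")
```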
An Attention-Based Mongolian Speaker Feature Extraction Method (基于注意力的蒙古语说话人特征提取方法)
Fangyuan Zhu (朱方圆) | Zhiqiang Ma (马志强) | Zhiqiang Liu (刘志强) | Caijilahu Bao (宝财吉拉呼) | Hongbin Wang (王洪彬)
Proceedings of the 21st Chinese National Conference on Computational Linguistics
Speaker features produced by existing speaker feature extraction models are poorly discriminative, so the Mongolian acoustic model cannot learn discriminative information and fails to adapt to different speakers. We propose an attention-based speaker adaptation method that introduces a Neural Turing Machine for adaptation: a memory module stores speaker features, and an attention mechanism computes a similarity weight matrix between the speaker features in memory and the speaker feature of the current utterance; the weights recombine the memory into a new speaker feature, the s-vector, improving the discriminability of speaker features. On the IMUT-MCT dataset, we conduct ablation experiments on the speaker feature extraction method, model adaptation experiments, and case analyses. Experimental results show that, compared with the i-vector and d-vector speaker features, the s-vector reduces SER and WER by 4.96% and 1.08%, respectively; across different Mongolian acoustic models, the proposed method improves performance over the baseline.
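The attention step described above can be sketched as a softmax-weighted recombination of the memory module's stored speaker features. A minimal numpy sketch follows; the scaled dot-product similarity is an assumption about the exact scoring function.

```python
import numpy as np

def s_vector(query: np.ndarray, memory: np.ndarray) -> np.ndarray:
    """Recombine stored speaker features into an s-vector.
    query: (d,) speaker feature of the current utterance.
    memory: (m, d) speaker features held in the memory module."""
    d = query.shape[-1]
    sims = memory @ query / np.sqrt(d)   # similarity to each stored speaker
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()             # softmax over memory slots
    return weights @ memory              # weighted recombination

rng = np.random.default_rng(0)
memory = rng.normal(size=(16, 32))   # 16 stored speaker features
query = rng.normal(size=(32,))
s = s_vector(query, memory)          # (32,) adapted speaker feature
```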
2018
The USTC-NEL Speech Translation system at IWSLT 2018
Dan Liu | Junhua Liu | Wu Guo | Shifu Xiong | Zhiqiang Ma | Rui Song | Chongliang Wu | Quan Liu
Proceedings of the 15th International Conference on Spoken Language Translation
This paper describes the USTC-NEL (short for "National Engineering Laboratory for Speech and Language Information Processing, University of Science and Technology of China") system for the speech translation task of the IWSLT 2018 Evaluation. The system is a conventional pipeline consisting of three modules: speech recognition, post-processing, and machine translation. We train a group of hybrid-HMM models for speech recognition, and for machine translation we train Transformer-based neural machine translation models with speech-recognition-style text as input. Experiments conducted on the IWSLT 2018 task indicate that, compared to the baseline system from KIT, our system achieves a 14.9 BLEU improvement.
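For orientation, here is a skeletal view of the three-module pipeline; all function names are hypothetical placeholders, not the authors' code.

```python
# Hedged sketch of the cascaded speech translation pipeline described above.

def recognize(audio_path: str) -> str:
    """Hybrid-HMM ASR: audio in, raw recognition text out."""
    raise NotImplementedError  # stand-in for the speech recognition module

def post_process(asr_text: str) -> str:
    """Normalize ASR output (e.g., casing, punctuation) before MT."""
    raise NotImplementedError  # stand-in for the post-processing module

def translate(source_text: str) -> str:
    """Transformer NMT trained on ASR-output-style source text."""
    raise NotImplementedError  # stand-in for the machine translation module

def speech_translate(audio_path: str) -> str:
    return translate(post_process(recognize(audio_path)))
```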