Tin Lok James Ng
2020
Generating Plausible Counterfactual Explanations for Deep Transformers in Financial Text Classification
Linyi Yang | Eoin Kenny | Tin Lok James Ng | Yi Yang | Barry Smyth | Ruihai Dong
Proceedings of the 28th International Conference on Computational Linguistics
Corporate mergers and acquisitions (M&A) account for billions of dollars of investment globally every year and offer an interesting and challenging domain for artificial intelligence. However, in these highly sensitive domains, it is crucial not only to have a highly robust and accurate model, but also to generate useful explanations that garner a user's trust in the automated system. Regrettably, research on eXplainable AI (XAI) in financial text classification has received little attention, and many current methods for generating text-based explanations produce highly implausible explanations, which damage a user's trust in the system. To address these issues, this paper proposes a novel methodology for producing plausible counterfactual explanations, whilst exploring the regularization benefits of adversarial training on language models in the domain of FinTech. Exhaustive quantitative experiments demonstrate that not only does this approach improve model accuracy when compared to the current state-of-the-art and human performance, but it also generates counterfactual explanations which are significantly more plausible based on human trials.
2019
Leveraging BERT to Improve the FEARS Index for Stock Forecasting
Linyi Yang | Ruihai Dong | Tin Lok James Ng | Yang Xu
Proceedings of the First Workshop on Financial Technology and Natural Language Processing
Co-authors
- Linyi Yang 2
- Ruihai Dong 2
- Eoin Kenny 1
- Yi Yang 1
- Barry Smyth 1
- Yang Xu 1