2022
DocFin: Multimodal Financial Prediction and Bias Mitigation using Semi-structured Documents
Puneet Mathur | Mihir Goyal | Ramit Sawhney | Ritik Mathur | Jochen Leidner | Franck Dernoncourt | Dinesh Manocha
Findings of the Association for Computational Linguistics: EMNLP 2022
Financial prediction is complex due to the stochastic nature of the stock market. Semi-structured financial documents present comprehensive financial data in tabular formats, such as earnings, profit-loss statements, and balance sheets, and often contain rich technical analysis alongside textual discussion of corporate history, management analysis, compliance, and risks. Existing research focuses on the textual and audio modalities of financial disclosures from company conference calls to forecast stock volatility and price movement, but ignores the rich tabular data available in financial reports. Moreover, the economic realm is still plagued by a severe under-representation of communities spanning diverse demographics, genders, and native languages. In this work, we show that combining tabular data from financial semi-structured documents with text transcripts and audio recordings not only improves stock volatility and price movement prediction by 5-12% but also reduces the gender bias caused by audio-based neural networks by over 30%.
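To make the multimodal setup concrete, here is a minimal sketch of late fusion over the three modalities the abstract names, assuming PyTorch; the class name, embedding dimensions, and concatenation-based fusion are illustrative assumptions, not the DocFin architecture.

```python
# Illustrative late fusion of text, audio, and tabular embeddings for
# volatility regression. All names and dimensions here are assumptions.
import torch
import torch.nn as nn

class MultimodalFusion(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, table_dim=64, hidden=256):
        super().__init__()
        # Project each modality into a shared hidden size, then concatenate.
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.table_proj = nn.Linear(table_dim, hidden)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(3 * hidden, 1))

    def forward(self, text_emb, audio_emb, table_emb):
        fused = torch.cat([self.text_proj(text_emb),
                           self.audio_proj(audio_emb),
                           self.table_proj(table_emb)], dim=-1)
        return self.head(fused)  # one volatility estimate per example

model = MultimodalFusion()
vol = model(torch.randn(8, 768), torch.randn(8, 128), torch.randn(8, 64))
```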
2018
attr2vec: Jointly Learning Word and Contextual Attribute Embeddings with Factorization Machines
Fabio Petroni | Vassilis Plachouras | Timothy Nugent | Jochen L. Leidner
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)
The widespread use of word embeddings is associated with the recent successes of many natural language processing (NLP) systems. The key approach of popular models such as word2vec and GloVe is to learn dense vector representations from the context of words. More recently, other approaches have been proposed that incorporate different types of contextual information, including topics, dependency relations, n-grams, and sentiment. However, these models typically integrate only limited additional contextual information, and often in ad hoc ways. In this work, we introduce attr2vec, a novel framework for jointly learning embeddings for words and contextual attributes based on factorization machines. We perform experiments with different types of contextual information. Our experimental results on a text classification task demonstrate that using attr2vec to jointly learn embeddings for words and Part-of-Speech (POS) tags improves results compared to learning the embeddings independently. Moreover, we use attr2vec to train dependency-based embeddings and we show that they exhibit higher similarity between functionally related words compared to traditional approaches.
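As a toy illustration of the factorization-machine formulation behind joint word/attribute embeddings: every feature (target word, context word, POS tag, and so on) receives a latent vector, and a co-occurrence is scored by the second-order FM term, a sum of pairwise dot products. This is a hedged sketch under assumed names (vocab, fm_score) and sizes, not the attr2vec training objective or code.

```python
# Toy second-order factorization-machine scoring over one-hot features:
# each feature gets a latent vector; a co-occurrence is scored by the
# sum of pairwise dot products. Names and sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
dim = 50
vocab = {"bank": 0, "river": 1, "money": 2, "NN": 3, "VB": 4}
V = rng.normal(scale=0.1, size=(len(vocab), dim))  # one latent vector per feature

def fm_score(active_features):
    """Second-order FM term for one-hot features: summed pairwise dot products."""
    idx = [vocab[f] for f in active_features]
    return sum(V[i] @ V[j] for a, i in enumerate(idx) for j in idx[a + 1:])

# Score a (target word, context word, context POS) co-occurrence:
print(fm_score(["bank", "river", "NN"]))
```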
A Comparison of Two Paraphrase Models for Taxonomy Augmentation
Vassilis Plachouras | Fabio Petroni | Timothy Nugent | Jochen L. Leidner
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)
Taxonomies are often used to look up the concepts they contain in text documents (for instance, to classify a document). The more comprehensive the taxonomy, the higher the recall of the application that uses it. In this paper, we explore automatic taxonomy augmentation with paraphrases. We compare two state-of-the-art paraphrase models, one based on Moses, a statistical machine translation system, and the other a sequence-to-sequence neural network, both trained on a paraphrase dataset, with respect to their ability to add novel nodes to an existing taxonomy from the risk domain. We conduct component-based and task-based evaluations. Our results show that paraphrasing is a viable method to enrich a taxonomy with more terms, and that Moses consistently outperforms the sequence-to-sequence neural model. To the best of our knowledge, this is the first approach to augment taxonomies with paraphrases.
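A minimal sketch of the augmentation step, assuming the taxonomy is a dict from node label to child labels: paraphrase each label and attach novel variants as new nodes. The paraphrase() stub is hypothetical and merely stands in for either model compared in the paper (Moses or the seq2seq network).

```python
# Hypothetical stand-in for an SMT or neural paraphraser.
def paraphrase(label):
    lookup = {"credit risk": ["risk of default", "counterparty risk"]}
    return lookup.get(label, [])

def augment(taxonomy):
    """taxonomy: dict mapping a node label to its list of child labels."""
    for node, children in taxonomy.items():
        for variant in paraphrase(node):
            if variant not in taxonomy and variant not in children:
                children.append(variant)  # attach a novel paraphrase node
    return taxonomy

print(augment({"credit risk": ["sovereign default"]}))
```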
2017
Ethical by Design: Ethics Best Practices for Natural Language Processing
Jochen L. Leidner | Vassilis Plachouras
Proceedings of the First ACL Workshop on Ethics in Natural Language Processing
Natural language processing (NLP) systems analyze and/or generate human language, typically on users’ behalf. One natural and necessary question that needs to be addressed in this context, both in research projects and in production settings, is how ethical the work is, regarding both the process and its outcome. Towards this end, we articulate a set of issues, propose a set of best practices, notably a process featuring an ethics review board, and sketch how they could be meaningfully applied. Our main argument is that ethical outcomes ought to be achieved by design, i.e. by following a process aligned with ethical values. We also offer some response options for those facing ethics issues. While a number of previous works discuss ethical issues, in particular around big data and machine learning, to the authors’ knowledge this is the first account of NLP and ethics from the perspective of a principled process.
Say the Right Thing Right: Ethics Issues in Natural Language Generation Systems
Charese Smiley | Frank Schilder | Vassilis Plachouras | Jochen L. Leidner
Proceedings of the First ACL Workshop on Ethics in Natural Language Processing
We discuss the ethical implications of Natural Language Generation systems. We use one particular system as a case study to identify and classify issues, and we provide an ethics checklist, in the hope that future system designers may benefit from conducting their own ethics reviews based on our checklist.
2016
When to Plummet and When to Soar: Corpus Based Verb Selection for Natural Language Generation
Charese Smiley | Vassilis Plachouras | Frank Schilder | Hiroko Bretz | Jochen Leidner | Dezhao Song
Proceedings of the 9th International Natural Language Generation conference
2011
Book Review: Handbook of Natural Language Processing (second edition) edited by Nitin Indurkhya and Fred J. Damerau
Jochen L. Leidner
Computational Linguistics, Volume 37, Issue 2 - June 2011
2010
Hunting for the Black Swan: Risk Mining from Text
Jochen Leidner | Frank Schilder
Proceedings of the ACL 2010 System Demonstrations
2008
Cost-Sensitive Learning in Answer Extraction
Michael Wiegand | Jochen L. Leidner | Dietrich Klakow
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
One problem of data-driven answer extraction in open-domain factoid question answering is that the class distribution of labeled training data is fairly imbalanced: in an ordinary training set, there are far more incorrect answers than correct answers. The class imbalance is thus inherent to the classification task. It has a deteriorating effect on the performance of classifiers trained with standard machine learning algorithms, which usually have a heavy bias towards the majority class, i.e. the class that occurs most often in the training set. In this paper, we propose a method to tackle class imbalance by applying a form of cost-sensitive learning, which is preferable to sampling. We present a simple but effective way of estimating the misclassification costs on the basis of the class distribution. This approach offers three benefits. Firstly, it maintains the class distribution of the labeled training data. Secondly, this form of meta-learning can be applied to a wide range of common learning algorithms. Thirdly, it can be easily implemented with the help of state-of-the-art machine learning software.
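A hedged sketch of distribution-based cost estimation, assuming scikit-learn: each class is weighted inversely to its frequency, so the rare correct-answer class carries a higher misclassification cost. The paper’s exact cost formula may differ; the "balanced" heuristic below is a stand-in.

```python
# Weight classes inversely to their frequency so the rare "correct
# answer" class is not swamped by negatives. Data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0] * 950 + [1] * 50)               # imbalanced: few correct answers
X = np.random.default_rng(0).normal(size=(1000, 4))

# Cost per class ~ n_samples / (n_classes * class_count)
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], weights)))                # minority class gets the larger cost

clf = LogisticRegression(class_weight={0: weights[0], 1: weights[1]})
clf.fit(X, y)
```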
2006
Building an Evaluation Corpus for German Question Answering by Harvesting Wikipedia
Irene Cramer | Jochen L. Leidner | Dietrich Klakow
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
The growing interest in open-domain question answering is limited by the lack of evaluation and training resources. To overcome this resource bottleneck for German, we propose a novel methodology to acquire new question-answer pairs for system evaluation that relies on volunteer collaboration over the Internet. Utilizing Wikipedia, a popular free online encyclopedia available in several languages, we show that the data acquisition problem can be cast as a Web experiment. We present a Web-based annotation tool and carry out a distributed data collection experiment. The data gathered from the mostly anonymous contributors is compared to a similar dataset produced in-house by domain experts on the one hand, and to the German questions from the CLEF QA 2004 effort on the other. Our analysis of the datasets suggests that, using our novel method, a medium-scale evaluation resource can be built at very small cost in a short period of time. The technique and software developed here are readily applicable to other languages where free online encyclopedias are available, and our resulting corpus is likewise freely available.
2003
Grounding spatial named entities for information extraction and question answering
Jochen L. Leidner | Gail Sinclair | Bonnie Webber
Proceedings of the HLT-NAACL 2003 Workshop on Analysis of Geographic References
Current Issues in Software Engineering for Natural Language Processing
Jochen Leidner
Proceedings of the HLT-NAACL 2003 Workshop on Software Engineering and Architecture of Language Technology Systems (SEALTS)
Automatic Multi-Layer Corpus Annotation for Evaluating Question Answering Methods: CBC4Kids
Jochen L. Leidner | Tiphaine Dalmas | Bonnie Webber | Johan Bos | Claire Grover
Proceedings of the 4th International Workshop on Linguistically Interpreted Corpora (LINC-03) at EACL 2003