Paul Rayson


2023

pdf bib
FinAraT5: A text to text model for financial Arabic text understanding and generation
Nadhem Zmandar | Mo El-Haj | Paul Rayson
Proceedings of the 4th Conference on Language, Data and Knowledge

pdf bib
Open-Source Thesaurus Development for Under-Resourced Languages: a Welsh Case Study
Nouran Khallaf | Elin Arfon | Mo El-Haj | Jonathan Morris | Dawn Knight | Paul Rayson | Tymaa Hasanain Hammouda | Mustafa Jarrar
Proceedings of the 4th Conference on Language, Data and Knowledge

2022

pdf bib
CoFiF Plus: A French Financial Narrative Summarisation Corpus
Nadhem Zmandar | Tobias Daudert | Sina Ahmadi | Mahmoud El-Haj | Paul Rayson
Proceedings of the Thirteenth Language Resources and Evaluation Conference

Natural Language Processing is increasingly being applied in the finance and business industry to analyse the text of many different types of financial documents. As firms around the world continue to grow, the volume of financial disclosures and financial texts in different languages and forms is increasing sharply, and the study of language technology methods that automatically summarise content has therefore grown rapidly into a major research area. Corpora for financial narrative summarisation exist in English, but there is a significant lack of financial text resources in the French language. To remedy this, we present CoFiF Plus, the first French financial narrative summarisation dataset, providing a comprehensive set of financial text written in French. The dataset has been extracted from French financial reports published in PDF format. It is composed of 1,703 reports from the most capitalised companies in France (Euronext Paris), covering the period from 1995 to 2021. This paper describes the collection, annotation and validation of the financial reports and their summaries. It also describes the dataset and gives the results of some baseline summarisers. Our datasets will be openly available upon the acceptance of the paper.

pdf bib
IgboBERT Models: Building and Training Transformer Models for the Igbo Language
Chiamaka Chukwuneke | Ignatius Ezeani | Paul Rayson | Mahmoud El-Haj
Proceedings of the Thirteenth Language Resources and Evaluation Conference

This work presents a standard Igbo named entity recognition (IgboNER) dataset as well as the results from training and fine-tuning state-of-the-art transformer IgboNER models. We discuss the process of our dataset creation: data collection, annotation and quality checking. We also present the experimental processes involved in building an IgboBERT language model from scratch, as well as fine-tuning it along with other non-Igbo pre-trained models for the downstream IgboNER task. Our results show that, although the IgboNER task benefited hugely from fine-tuning large transformer models, fine-tuning a transformer model built from scratch with comparatively little Igbo text data seems to yield quite decent results for the IgboNER task. This work will contribute immensely to IgboNLP in particular as well as the wider African and low-resource NLP efforts. Keywords: Igbo, named entity recognition, BERT models, under-resourced, dataset

pdf bib
AraSAS: The Open Source Arabic Semantic Tagger
Mahmoud El-Haj | Elvis de Souza | Nouran Khallaf | Paul Rayson | Nizar Habash
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection

This paper presents AraSAS, the first open-source Arabic semantic analysis tagging system. AraSAS is a software framework that provides full semantic tagging of text written in Arabic. AraSAS is based on the UCREL Semantic Analysis System (USAS), which was first developed to semantically tag English text. Like USAS, AraSAS uses a hierarchical semantic tag set that contains 21 major discourse fields and 232 fine-grained semantic field tags. The paper describes the creation, validation and evaluation of AraSAS. In addition, we demonstrate a first case study to illustrate the affordances of applying the USAS and AraSAS semantic taggers on the Zayed University Arabic-English Bilingual Undergraduate Corpus (ZAEBUC) (Palfreyman and Habash, 2022), where we show and compare the coverage of the two semantic taggers by running them on Arabic and English essays on different topics. The analysis extends to comparing the taggers when run on texts in Arabic and English written by the same writer, and on texts written by male and by female students. Variables for comparison include the frequency of use of particular semantic sub-domains, as well as the diversity of semantic elements within a text.

pdf bib
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022
Mahmoud El-Haj | Paul Rayson | Nadhem Zmandar
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022

pdf bib
The Financial Narrative Summarisation Shared Task (FNS 2022)
Mahmoud El-Haj | Nadhem Zmandar | Paul Rayson | Ahmed AbuRa’ed | Marina Litvak | Nikiforos Pittaras | George Giannakopoulos | Aris Kosmopoulos | Blanca Carbajo-Coronado | Antonio Moreno-Sandoval
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022

This paper presents the results and findings of the Financial Narrative Summarisation Shared Task on summarising UK, Greek and Spanish annual reports. The shared task was organised as part of the Financial Narrative Processing 2022 Workshop (FNP 2022 Workshop). The Financial Narrative Summarisation Shared Task (FNS-2022) has been running since 2020 as part of the Financial Narrative Processing (FNP) workshop series (El-Haj et al., 2022; El-Haj et al., 2021; El-Haj et al., 2020b; El-Haj et al., 2019c; El-Haj et al., 2018). The shared task included one main task: the use of either abstractive or extractive automatic summarisers to summarise long documents, namely UK, Greek and Spanish financial annual reports. This shared task is the third to target financial documents. The data for the shared task was created and collected from publicly available annual reports published by firms listed on the stock exchanges of the UK, Greece and Spain. A total of 14 systems from 7 different teams participated in the shared task.

2021

pdf bib
Proceedings of the 3rd Financial Narrative Processing Workshop
Mahmoud El-Haj | Paul Rayson | Nadhem Zmandar
Proceedings of the 3rd Financial Narrative Processing Workshop

pdf bib
Joint abstractive and extractive method for long financial document summarization
Nadhem Zmandar | Abhishek Singh | Mahmoud El-Haj | Paul Rayson
Proceedings of the 3rd Financial Narrative Processing Workshop

pdf bib
The Financial Narrative Summarisation Shared Task FNS 2021
Nadhem Zmandar | Mahmoud El-Haj | Paul Rayson | Ahmed Abura’Ed | Marina Litvak | George Giannakopoulos | Nikiforos Pittaras
Proceedings of the 3rd Financial Narrative Processing Workshop

pdf bib
MasakhaNER: Named Entity Recognition for African Languages
David Ifeoluwa Adelani | Jade Abbott | Graham Neubig | Daniel D’souza | Julia Kreutzer | Constantine Lignos | Chester Palen-Michel | Happy Buzaaba | Shruti Rijhwani | Sebastian Ruder | Stephen Mayhew | Israel Abebe Azime | Shamsuddeen H. Muhammad | Chris Chinenye Emezue | Joyce Nakatumba-Nabende | Perez Ogayo | Aremu Anuoluwapo | Catherine Gitau | Derguene Mbaye | Jesujoba Alabi | Seid Muhie Yimam | Tajuddeen Rabiu Gwadabe | Ignatius Ezeani | Rubungo Andre Niyongabo | Jonathan Mukiibi | Verrah Otiende | Iroro Orife | Davis David | Samba Ngom | Tosin Adewumi | Paul Rayson | Mofetoluwa Adeyemi | Gerald Muriuki | Emmanuel Anebi | Chiamaka Chukwuneke | Nkiruka Odu | Eric Peter Wairagala | Samuel Oyerinde | Clemencia Siro | Tobius Saul Bateesa | Temilola Oloyede | Yvonne Wambui | Victor Akinode | Deborah Nabagereka | Maurice Katusiime | Ayodele Awokoya | Mouhamadane MBOUP | Dibora Gebreyohannes | Henok Tilaye | Kelechi Nwaike | Degaga Wolde | Abdoulaye Faye | Blessing Sibanda | Orevaoghene Ahia | Bonaventure F. P. Dossou | Kelechi Ogueji | Thierno Ibrahima DIOP | Abdoulaye Diallo | Adewale Akinfaderin | Tendai Marengereke | Salomey Osei
Transactions of the Association for Computational Linguistics, Volume 9

We take a step towards addressing the under-representation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.

pdf bib
Understanding who uses Reddit: Profiling individuals with a self-reported bipolar disorder diagnosis
Glorianna Jagfeld | Fiona Lobban | Paul Rayson | Steven Jones
Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access

Recently, research on mental health conditions using public online data, including Reddit, has surged in NLP and health research, but has not reported user characteristics, which are important to judge the generalisability of findings. This paper shows how existing NLP methods can yield information on the clinical, demographic, and identity characteristics of almost 20K Reddit users who self-report a bipolar disorder diagnosis. This population consists of slightly more feminine- than masculine-gendered, mainly young or middle-aged, US-based adults who often report additional mental health diagnoses; we compare these characteristics with general Reddit statistics and epidemiological studies. Additionally, this paper carefully evaluates all methods and discusses ethical issues.

2020

pdf bib
COVID-19 and Arabic Twitter: How can Arab World Governments and Public Health Organizations Learn from Social Media?
Lama Alsudias | Paul Rayson
Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020

In March 2020, the World Health Organization declared the COVID-19 outbreak a pandemic. Most previous social media research related to COVID-19 has focused on English tweets. In this study, we collect approximately 1 million Arabic tweets related to COVID-19 from the Twitter streaming API. Focussing on outcomes that we believe will be useful for Public Health Organizations, we analyse them in three different ways: identifying the topics discussed during the period, detecting rumours, and predicting the source of the tweets. We use the k-means algorithm for the first goal with k=5. The topics discussed can be grouped as follows: COVID-19 statistics, prayers for God, COVID-19 locations, advice and education for prevention, and advertising. We sample 2,000 tweets and label them manually as false information, correct information, or unrelated. Then, we apply three different machine learning algorithms (Logistic Regression, Support Vector Classification, and Naïve Bayes) with two sets of features: a word frequency approach and word embeddings. We find that machine learning classifiers are able to correctly identify the rumour-related tweets with 84% accuracy. We also try to predict the source of the rumour-related tweets using our previous model, which classifies tweets into five categories: academic, media, government, health professional, and public. Around 60% of the rumour-related tweets are classified as written by health professionals and academics.
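As a rough illustration of the pipeline sketched in this abstract, the snippet below clusters tweets with k-means (k=5) and runs a bag-of-words rumour classifier. It is a minimal sketch, not the authors' code, and the file name, column names and label values are hypothetical placeholders.

```python
# Minimal sketch of the two analysis steps described in the abstract
# (k-means with k=5, then a bag-of-words rumour classifier).
# File names, column names and label values are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

tweets = pd.read_csv("arabic_covid_tweets.csv")          # hypothetical input file
X = TfidfVectorizer(max_features=20000).fit_transform(tweets["text"])

# 1) Topic discovery: k-means with k=5, as in the abstract.
topics = KMeans(n_clusters=5, random_state=0).fit_predict(X)

# 2) Rumour detection on the manually labelled subset
#    (labels: false / correct / unrelated information).
labelled = tweets.dropna(subset=["rumour_label"])
Xl = TfidfVectorizer(max_features=20000).fit_transform(labelled["text"])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, Xl, labelled["rumour_label"], cv=5).mean())
```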

pdf bib
LexiDB: Patterns & Methods for Corpus Linguistic Database Management
Matthew Coole | Paul Rayson | John Mariani
Proceedings of the Twelfth Language Resources and Evaluation Conference

LexiDB is a tool for storing, managing and querying corpus data. In contrast to other database management systems (DBMSs), it is designed specifically for text corpora. It improves on other corpus management systems (CMSs) because data can be added to and deleted from corpora on the fly, with the ability to add live data to existing corpora. LexiDB sits between these two categories of DBMSs and CMSs: more specialised to language data than a general-purpose DBMS but more flexible than a traditional static corpus management system. Previous work has demonstrated the scalability of LexiDB in response to the growing need to scale out for ever-growing corpus datasets. Here, we present the patterns and methods developed in LexiDB for the storage, retrieval and querying of multi-level annotated corpus data. These techniques are evaluated and compared to an existing CMS (Corpus Workbench, CWB/CQP) and indexer (Lucene). We find that LexiDB consistently outperforms existing tools for corpus queries. This is particularly apparent with large corpora and when handling queries with large result sets.

pdf bib
Developing an Arabic Infectious Disease Ontology to Include Non-Standard Terminology
Lama Alsudias | Paul Rayson
Proceedings of the Twelfth Language Resources and Evaluation Conference

Building ontologies is a crucial part of the semantic web endeavour. In recent years, research interest has grown rapidly in supporting languages such as Arabic in NLP in general, but there has been very little research on medical ontologies for Arabic. We present a new Arabic ontology in the infectious disease domain to support various important applications, including the monitoring of infectious disease spread via social media. This ontology meaningfully integrates the scientific vocabularies of infectious diseases with their informal equivalents. We use ontology learning strategies with manual checking to build the ontology. We applied three statistical methods for term extraction from selected Arabic infectious disease articles: TF-IDF, C-value, and YAKE. We also conducted a study, consulting around 100 individuals, to discover the informal terms related to infectious diseases in Arabic. In future work, we will automatically extract the relations for infectious disease concepts, but for now these are manually created. We report two complementary experiments to evaluate the ontology: first, a quantitative evaluation of the term extraction results, and second, a qualitative evaluation by a domain expert.
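As a small illustration of the term-extraction step, the sketch below ranks candidate terms by TF-IDF, the simplest of the three methods named above; the document list is a placeholder and this is not the authors' implementation.

```python
# Sketch of TF-IDF term ranking over a small collection of articles;
# the document list is a hypothetical placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer

articles = ["... Arabic infectious disease article 1 ...",
            "... Arabic infectious disease article 2 ..."]

vec = TfidfVectorizer(ngram_range=(1, 2))   # unigrams and bigrams as candidate terms
tfidf = vec.fit_transform(articles)

# Rank candidate terms by their maximum TF-IDF score across the collection.
scores = tfidf.max(axis=0).toarray().ravel()
terms = vec.get_feature_names_out()
for term, score in sorted(zip(terms, scores), key=lambda t: -t[1])[:20]:
    print(f"{term}\t{score:.3f}")
```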

pdf bib
Infrastructure for Semantic Annotation in the Genomics Domain
Mahmoud El-Haj | Nathan Rutherford | Matthew Coole | Ignatius Ezeani | Sheryl Prentice | Nancy Ide | Jo Knight | Scott Piao | John Mariani | Paul Rayson | Keith Suderman
Proceedings of the Twelfth Language Resources and Evaluation Conference

We describe a novel super-infrastructure for biomedical text mining which incorporates an end-to-end pipeline for the collection, annotation, storage, retrieval and analysis of biomedical and life sciences literature, combining NLP and corpus linguistics methods. The infrastructure permits extreme-scale research on the open access PubMed Central archive. It combines an updatable Gene Ontology Semantic Tagger (GOST) for entity identification and semantic markup in the literature, with an NLP pipeline scheduler (Buster) to collect and process the corpus, and a bespoke columnar corpus database (LexiDB) for indexing. The corpus database is distributed to permit fast indexing, and provides a simple web front-end with corpus linguistics methods for sub-corpus comparison and retrieval. GOST is also connected as a service in the Language Application (LAPPS) Grid, in which context it is interoperable with other NLP tools and data in the Grid and can be combined with them in more complex workflows. In a literature-based discovery setting, we have created an annotated corpus of 9,776 papers with 5,481,543 words.

pdf bib
Unfinished Business: Construction and Maintenance of a Semantically Tagged Historical Parliamentary Corpus, UK Hansard from 1803 to the present day
Matthew Coole | Paul Rayson | John Mariani
Proceedings of the Second ParlaCLARIN Workshop

Creating, curating and maintaining modern political corpora is becoming an ever more involved task. As interest from various social bodies and the general public in political discourse grows, so too does the need to enrich such datasets with metadata and linguistic annotations. Beyond this, such corpora must be easy to browse and search for linguists, social scientists, digital humanists and the general public. We present our efforts to compile a linguistically annotated and semantically tagged version of the Hansard corpus from 1803 right up to the present day. This involves combining multiple sources of documents and transcripts. We describe our toolchain for tagging, which uses several existing tools providing tokenisation, part-of-speech tagging and semantic annotation. We also provide an overview of our bespoke web-based search interface built on LexiDB. In conclusion, we examine the completed corpus by looking at four case studies including semantic categories made available by our toolchain.

2019

pdf bib
FIESTA: Fast IdEntification of State-of-The-Art models using adaptive bandit algorithms
Henry Moss | Andrew Moore | David Leslie | Paul Rayson
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

We present FIESTA, a model selection approach that significantly reduces the computational resources required to reliably identify state-of-the-art performance from large collections of candidate models. Despite being known to produce unreliable comparisons, it is still common practice to compare model evaluations based on single choices of random seeds. We show that reliable model selection also requires evaluations based on multiple train-test splits (contrary to common practice in many shared tasks). Using bandit theory from the statistics literature, we are able to adaptively determine appropriate numbers of data splits and random seeds used to evaluate each model, focusing computational resources on the evaluation of promising models whilst avoiding wasting evaluations on models with lower performance. Furthermore, our user-friendly Python implementation produces confidence guarantees of correctly selecting the optimal model. We evaluate our algorithms by selecting between 8 target-dependent sentiment analysis methods using dramatically fewer model evaluations than current model selection approaches.
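The adaptive-allocation idea can be illustrated with a toy successive-halving loop: weak models are dropped early and evaluations are concentrated on promising ones. This is a simplified stand-in, not the FIESTA algorithm itself, and `evaluate` is a placeholder for one train/test run with a fresh split and seed.

```python
# Toy sketch of adaptive evaluation allocation for model selection.
# Not the FIESTA algorithm; a simple successive-halving illustration.
import random
import statistics

def evaluate(model):
    # Placeholder: one noisy evaluation (e.g. macro-F1) with a fresh split/seed.
    return model["true_score"] + random.gauss(0, 0.02)

models = [{"name": f"m{i}", "true_score": s, "scores": []}
          for i, s in enumerate([0.70, 0.72, 0.73, 0.78])]

candidates = list(models)
while len(candidates) > 1:
    for m in candidates:
        m["scores"].append(evaluate(m))       # one more evaluation each round
    candidates.sort(key=lambda m: statistics.mean(m["scores"]), reverse=True)
    candidates = candidates[: max(1, len(candidates) // 2)]   # drop weakest half

print("selected:", candidates[0]["name"])
```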

pdf bib
Leveraging Pre-Trained Embeddings for Welsh Taggers
Ignatius Ezeani | Scott Piao | Steven Neale | Paul Rayson | Dawn Knight
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)

While the application of word embedding models to downstream Natural Language Processing (NLP) tasks has been shown to be successful, the benefits for low-resource languages are somewhat limited due to the lack of adequate data for training the models. However, NLP research efforts for low-resource languages have focused on constantly seeking ways to harness pre-trained models to improve the performance of NLP systems built to process these languages without the need to re-invent the wheel. One such language is Welsh, and therefore, in this paper, we present the results of our experiments on learning a simple multi-task neural network model for part-of-speech and semantic tagging of Welsh using a pre-trained embedding model from FastText. Our model’s performance was compared with that of the existing rule-based stand-alone part-of-speech and semantic taggers. Despite its simplicity and its capacity to perform both tasks simultaneously, our tagger compared very well with the existing taggers.
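A minimal sketch of the kind of multi-task tagger described here, assuming a shared BiLSTM encoder over pre-trained embeddings with separate POS and semantic tag heads; the dimensions and tag-set sizes are illustrative assumptions, not the paper's settings.

```python
# Minimal PyTorch sketch of a multi-task tagger: shared encoder over
# pre-trained (e.g. FastText) embeddings, one head per tagging task.
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    def __init__(self, pretrained, n_pos, n_sem, hidden=128):
        super().__init__()
        self.emb = nn.Embedding.from_pretrained(pretrained, freeze=False)
        self.encoder = nn.LSTM(pretrained.size(1), hidden,
                               batch_first=True, bidirectional=True)
        self.pos_head = nn.Linear(2 * hidden, n_pos)   # POS tag scores per token
        self.sem_head = nn.Linear(2 * hidden, n_sem)   # semantic tag scores per token

    def forward(self, token_ids):
        h, _ = self.encoder(self.emb(token_ids))
        return self.pos_head(h), self.sem_head(h)

# Example with random vectors standing in for real pre-trained embeddings.
vectors = torch.randn(5000, 300)
model = MultiTaskTagger(vectors, n_pos=15, n_sem=21)
pos_logits, sem_logits = model(torch.randint(0, 5000, (2, 10)))
print(pos_logits.shape, sem_logits.shape)   # (2, 10, 15) and (2, 10, 21)
```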

pdf bib
Proceedings of the 3rd Workshop on Arabic Corpus Linguistics
Mahmoud El-Haj | Paul Rayson | Eric Atwell | Lama Alsudias
Proceedings of the 3rd Workshop on Arabic Corpus Linguistics

pdf bib
Classifying Information Sources in Arabic Twitter to Support Online Monitoring of Infectious Diseases
Lama Alsudias | Paul Rayson
Proceedings of the 3rd Workshop on Arabic Corpus Linguistics

pdf bib
Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019)
Mahmoud El-Haj | Paul Rayson | Steven Young | Houda Bouamor | Sira Ferradans
Proceedings of the Second Financial Narrative Processing Workshop (FNP 2019)

2018

pdf bib
Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis
Andrew Moore | Paul Rayson
Proceedings of the 27th International Conference on Computational Linguistics

Lack of repeatability and generalisability are two significant threats to continuing scientific development in Natural Language Processing. Language models and learning methods are so complex that scientific conference papers no longer contain enough space for the technical depth required for replication or reproduction. Taking Target Dependent Sentiment Analysis as a case study, we show how recent work in the field has not consistently released code, or described settings for learning methods in enough detail, and lacks comparability and generalisability in train, test or validation data. To investigate generalisability and to enable state of the art comparative evaluations, we carry out the first reproduction studies of three groups of complementary methods and perform the first large-scale mass evaluation on six different English datasets. Reflecting on our experiences, we recommend that future replication or reproduction experiments should always consider a variety of datasets alongside documenting and releasing their methods and published code in order to minimise the barriers to both repeatability and generalisability. We have released our code with a model zoo on GitHub with Jupyter Notebooks to aid understanding and full documentation, and we recommend that others do the same with their papers at submission time through an anonymised GitHub account.

pdf bib
Using J-K-fold Cross Validation To Reduce Variance When Tuning NLP Models
Henry Moss | David Leslie | Paul Rayson
Proceedings of the 27th International Conference on Computational Linguistics

K-fold cross validation (CV) is a popular method for estimating the true performance of machine learning models, allowing model selection and parameter tuning. However, the very process of CV requires random partitioning of the data and so our performance estimates are in fact stochastic, with variability that can be substantial for natural language processing tasks. We demonstrate that these unstable estimates cannot be relied upon for effective parameter tuning. The resulting tuned parameters are highly sensitive to how our data is partitioned, meaning that we often select sub-optimal parameter choices and have serious reproducibility issues. Instead, we propose to use the less variable J-K-fold CV, in which J independent K-fold cross validations are used to assess performance. Our main contributions are extending J-K-fold CV from performance estimation to parameter tuning and investigating how to choose J and K. We argue that variability is more important than bias for effective tuning and so advocate lower choices of K than are typically seen in the NLP literature and instead use the saved computation to increase J. To demonstrate the generality of our recommendations we investigate a wide range of case-studies: sentiment classification (both general and target-specific), part-of-speech tagging and document classification.
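A small sketch of J-K-fold CV under these recommendations: J independent K-fold cross validations, each with its own random partition, averaged into one less variable estimate. The model and dataset below are placeholders.

```python
# Sketch of J-K-fold cross validation: J independent K-fold CVs,
# each with a fresh random partition, averaged together.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

def j_k_fold_score(model, X, y, J=5, K=5):
    scores = []
    for j in range(J):
        cv = KFold(n_splits=K, shuffle=True, random_state=j)  # fresh partition per repeat
        scores.extend(cross_val_score(model, X, y, cv=cv))
    return np.mean(scores), np.std(scores)

X, y = load_iris(return_X_y=True)                # placeholder dataset
mean, std = j_k_fold_score(LogisticRegression(max_iter=1000), X, y)
print(f"accuracy {mean:.3f} ± {std:.3f}")
```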

pdf bib
Towards a Welsh Semantic Annotation System
Scott Piao | Paul Rayson | Dawn Knight | Gareth Watkins
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf bib
Arabic Dialect Identification in the Context of Bivalency and Code-Switching
Mahmoud El-Haj | Paul Rayson | Mariam Aboelezz
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf bib
Profiling Medical Journal Articles Using a Gene Ontology Semantic Tagger
Mahmoud El-Haj | Paul Rayson | Scott Piao | Jo Knight
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2017

pdf bib
Creating and Validating Multilingual Semantic Representations for Six Languages: Expert versus Non-Expert Crowds
Mahmoud El-Haj | Paul Rayson | Scott Piao | Stephen Wattam
Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications

Creating high-quality wide-coverage multilingual semantic lexicons to support knowledge-based approaches is a challenging time-consuming manual task. This has traditionally been performed by linguistic experts: a slow and expensive process. We present an experiment in which we adapt and evaluate crowdsourcing methods employing native speakers to generate a list of coarse-grained senses under a common multilingual semantic taxonomy for sets of words in six languages. 451 non-experts (including 427 Mechanical Turk workers) and 15 expert participants semantically annotated 250 words manually for Arabic, Chinese, English, Italian, Portuguese and Urdu lexicons. In order to avoid erroneous (spam) crowdsourced results, we used a novel task-specific two-phase filtering process where users were asked to identify synonyms in the target language, and remove erroneous senses.

pdf bib
Lancaster A at SemEval-2017 Task 5: Evaluation metrics matter: predicting sentiment from financial news headlines
Andrew Moore | Paul Rayson
Proceedings of the 11th International Workshop on Semantic Evaluation (SemEval-2017)

This paper describes our participation in Task 5 track 2 of SemEval 2017 to predict the sentiment of financial news headlines for a specific company on a continuous scale between -1 and 1. We tackled the problem using a number of approaches, utilising a Support Vector Regression (SVR) and a Bidirectional Long Short-Term Memory (BLSTM). We found an improvement of 4-6% using the LSTM model over the SVR and came fourth in the track. We report a number of different evaluations using a finance specific word embedding model and reflect on the effects of using different evaluation metrics.
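A minimal sketch of an SVR baseline of the kind described, mapping TF-IDF features of a headline to a continuous sentiment score in [-1, 1]; the example headlines and scores are invented and this is not the submitted system.

```python
# Sketch of a Support Vector Regression baseline for continuous
# headline sentiment; headlines and target scores are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

headlines = ["Shares in Acme plunge after profit warning",
             "Acme beats expectations, raises full-year guidance"]
scores = [-0.7, 0.8]   # continuous sentiment targets in [-1, 1]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), SVR(kernel="linear"))
model.fit(headlines, scores)
print(model.predict(["Acme announces record quarterly revenue"]))
```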

2016

pdf bib
OSMAN ― A Novel Arabic Readability Metric
Mahmoud El-Haj | Paul Rayson
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

We present OSMAN (Open Source Metric for Measuring Arabic Narratives), a novel open-source Arabic readability metric and tool. It allows researchers to calculate readability for Arabic text with and without diacritics. OSMAN is a modified version of conventional readability formulas such as Flesch and Fog. In our work we introduce a novel approach towards counting short, long and stress syllables in Arabic, which is essential for judging the readability of Arabic narratives. We also introduce an additional factor called “Faseeh” which considers aspects of script usually dropped in informal Arabic writing. To evaluate our methods we used Spearman’s correlation to compare text readability for 73,000 parallel sentences from English and Arabic UN documents. The Arabic sentences were written without diacritics, so in order to count the number of syllables we added the diacritics using an open-source tool called Mishkal. The results show that the OSMAN readability formula correlates well with the English ones, making it a useful tool for researchers and educators working with Arabic text.
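For orientation, the sketch below shows a Flesch-style score combining average sentence length and average syllables per word, the family of formulas OSMAN modifies. The coefficients are the standard Flesch Reading Ease weights rather than OSMAN's, and the syllable counter is a crude placeholder, not the paper's short/long/stress syllable counting.

```python
# Illustrative Flesch-style readability sketch only; the weights are the
# standard Flesch Reading Ease coefficients, NOT the OSMAN weights, and
# the syllable counter is a trivial placeholder.
def flesch_style_score(sentences):
    words = [w for s in sentences for w in s.split()]
    syllables = sum(max(1, len(w) // 3) for w in words)   # crude placeholder count
    asl = len(words) / len(sentences)                     # average sentence length
    asw = syllables / len(words)                          # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

print(flesch_style_score(["هذا نص قصير للتوضيح", "جملة ثانية أطول قليلا من الأولى"]))
```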

pdf bib
Learning Tone and Attribution for Financial Text Mining
Mahmoud El-Haj | Paul Rayson | Steve Young | Andrew Moore | Martin Walker | Thomas Schleicher | Vasiliki Athanasakou
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Attribution bias refers to the tendency of people to attribute successes to their own abilities but failures to external factors. In a business context, an internal factor might be the restructuring of the firm and an external factor might be an unfavourable change in exchange or interest rates. In accounting research, the presence of an attribution bias has been demonstrated for the narrative sections of annual financial reports, but previous studies have only applied manual content analysis, and only on a small scale. In this paper we present novel work to automate the analysis of attribution bias using machine learning algorithms. In our work a group of experts in accounting and finance labelled and annotated a list of 32,449 sentences from a random sample of UK Preliminary Earnings Announcements (PEAs) to allow us to examine whether sentences in PEAs contain internal or external attribution and which kinds of attributions are linked to positive or negative performance. We wished to examine whether human annotators could agree on coding this difficult task and whether Machine Learning (ML) could be applied reliably to replicate the coding process on a much larger scale. Our best machine learning algorithm correctly classified performance sentences with 70% accuracy and detected tone and attribution in financial PEAs with an accuracy of 79%.

pdf bib
UPPC - Urdu Paraphrase Plagiarism Corpus
Muhammad Sharjeel | Paul Rayson | Rao Muhammad Adeel Nawab
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

Paraphrase plagiarism is a significant and widespread problem and research shows that it is hard to detect. Several methods and automatic systems have been proposed to deal with it. However, evaluation and comparison of such solutions is not possible because of the unavailability of benchmark corpora with manual examples of paraphrase plagiarism. To deal with this issue, we present the novel development of a paraphrase plagiarism corpus containing simulated (manually created) examples in the Urdu language - a language widely spoken around the world. This resource is the first of its kind developed for the Urdu language and we believe that it will be a valuable contribution to the evaluation of paraphrase plagiarism detection systems.

pdf bib
Lexical Coverage Evaluation of Large-scale Multilingual Semantic Lexicons for Twelve Languages
Scott Piao | Paul Rayson | Dawn Archer | Francesca Bianchi | Carmen Dayrell | Mahmoud El-Haj | Ricardo-María Jiménez | Dawn Knight | Michal Křen | Laura Löfberg | Rao Muhammad Adeel Nawab | Jawad Shafi | Phoey Lee Teh | Olga Mudraya
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)

The last two decades have seen the development of various semantic lexical resources such as WordNet (Miller, 1995) and the USAS semantic lexicon (Rayson et al., 2004), which have played an important role in the areas of natural language processing and corpus-based studies. Recently, increasing efforts have been devoted to extending the semantic frameworks of existing lexical knowledge resources to cover more languages, such as EuroWordNet and Global WordNet. In this paper, we report on the construction of large-scale multilingual semantic lexicons for twelve languages, which employ the unified Lancaster semantic taxonomy and provide a multilingual lexical knowledge base for the automatic UCREL semantic annotation system (USAS). Our work contributes towards the goal of constructing larger-scale and higher-quality multilingual semantic lexical resources and developing corpus annotation tools based on them. Lexical coverage is an important factor concerning the quality of the lexicons and the performance of the corpus annotation tools, and in this experiment we focus on evaluating the lexical coverage achieved by the multilingual lexicons and semantic annotation tools based on them. Our evaluation shows that some semantic lexicons such as those for Finnish and Italian have achieved lexical coverage of over 90% while others need further expansion.
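Lexical coverage itself reduces to a simple proportion: the share of corpus tokens that receive an entry in the lexicon. The sketch below computes it over a tiny invented lexicon and sentence.

```python
# Simple sketch of the lexical-coverage measure: the proportion of corpus
# tokens found in a (semantic) lexicon. Lexicon and tokens are invented.
def lexical_coverage(tokens, lexicon):
    covered = sum(1 for t in tokens if t.lower() in lexicon)
    return covered / len(tokens)

lexicon = {"the", "cat", "sat", "on", "mat"}
tokens = "The cat sat on the persian mat".split()
print(f"token coverage: {lexical_coverage(tokens, lexicon):.1%}")   # 6/7 ≈ 85.7%
```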

2015

pdf bib
Development of the Multilingual Semantic Annotation System
Scott Piao | Francesca Bianchi | Carmen Dayrell | Angela D’Egidio | Paul Rayson
Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2014

pdf bib
Detecting Document Structure in a Very Large Corpus of UK Financial Reports
Mahmoud El-Haj | Paul Rayson | Steve Young | Martin Walker
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

In this paper we present the evaluation of our automatic methods for detecting and extracting document structure in annual financial reports. The work presented is part of the Corporate Financial Information Environment (CFIE) project, in which we are using Natural Language Processing (NLP) techniques to study the causes and consequences of corporate disclosure and financial reporting outcomes. We aim to uncover the determinants of financial reporting quality and the factors that influence the quality of information disclosed to investors beyond the financial statements. The CFIE consists of the supply of information by firms to investors, and the mediating influences of information intermediaries on the timing, relevance and reliability of information available to investors. It is important to compare and contrast specific elements or sections of each annual financial report across our entire corpus rather than working at the full document level. We show that the values of some metrics, e.g. readability, vary across sections, thus improving on previous research based on full texts.

pdf bib
Experiences with Parallelisation of an Existing NLP Pipeline: Tagging Hansard
Stephen Wattam | Paul Rayson | Marc Alexander | Jean Anderson
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)

This poster describes experiences processing the two-billion-word Hansard corpus using a fairly standard NLP pipeline on a high performance cluster. Herein we report how we were able to parallelise and apply a traditional single-threaded batch-oriented application to a platform that differs greatly from that for which it was originally designed. We start by discussing the tagging toolchain, its specific requirements and properties, and its performance characteristics. This is contrasted with a description of the cluster on which it was to run, and specific limitations are discussed such as the overhead of using SAN-based storage. We then go on to discuss the nature of the Hansard corpus, and describe which properties of this corpus in particular prove challenging for use on the system architecture used. The solution for tagging the corpus is then described, along with performance comparisons against a naive run on commodity hardware. We discuss the gains and benefits of using high-performance machinery rather than relatively cheap commodity hardware. Our poster provides a valuable scenario for large scale NLP pipelines and lessons learnt from the experience.
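The basic parallelisation pattern described here can be sketched as splitting the corpus into independent chunks and running the single-threaded tagger over them in worker processes; the `./tagger` command and directory below are hypothetical stand-ins for the real toolchain.

```python
# Sketch of chunk-level parallelisation of a single-threaded tagging
# pipeline; the tagger command and input directory are hypothetical.
import subprocess
from multiprocessing import Pool
from pathlib import Path

def tag_file(path: Path) -> Path:
    out = path.with_suffix(".tagged")
    # Hypothetical command line standing in for the existing tagger.
    subprocess.run(["./tagger", str(path), "-o", str(out)], check=True)
    return out

if __name__ == "__main__":
    chunks = sorted(Path("hansard_chunks").glob("*.txt"))
    with Pool(processes=16) as pool:              # one tagger instance per core
        tagged = pool.map(tag_file, chunks)
    print(f"tagged {len(tagged)} chunks")
```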

2013

pdf bib
Using a Keyness Metric for Single and Multi Document Summarisation
Mahmoud El-Haj | Paul Rayson
Proceedings of the MultiLing 2013 Workshop on Multilingual Multi-document Summarization

2012

pdf bib
Document Attrition in Web Corpora: an Exploration
Stephen Wattam | Paul Rayson | Damon Berridge
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)

Increases in the use of web data for corpus-building, coupled with the use of specialist, single-use corpora, make for an increasing reliance on language that changes quickly, affecting the long-term validity of studies based on these methods. This ‘drift’ through time affects both users of open-source corpora and those attempting to interpret the results of studies based on web data. The attrition of documents online, also called link rot or document half-life, has been studied many times for the purposes of optimising search engine web crawlers, producing robust and reliable archival systems, and ensuring the integrity of distributed information stores; however, the effect that attrition has upon corpora of varying construction remains largely unknown. This paper presents a preliminary investigation into the differences in attrition rate between corpora selected using different corpus construction methods. It represents the first step in a larger longitudinal analysis, and as such presents URI-based content clues, chosen to relate to studies from other areas. The ultimate goal of this larger study is to produce a detailed enumeration of the primary biases online, and to identify sampling strategies which control and minimise unwanted effects of document attrition.

2006

pdf bib
ASSIST: Automated Semantic Assistance for Translators
Serge Sharoff | Bogdan Babych | Paul Rayson | Olga Mudraya | Scott Piao
Demonstrations

pdf bib
Measuring MWE Compositionality Using Semantic Annotation
Scott S.L. Piao | Paul Rayson | Olga Mudraya | Andrew Wilson | Roger Garside
Proceedings of the Workshop on Multiword Expressions: Identifying and Exploiting Underlying Properties

pdf bib
Annotated Web as corpus
Paul Rayson | James Walkerdine | William H. Fletcher | Adam Kilgarriff
Proceedings of the 2nd International Workshop on Web as Corpus

pdf bib
Automatic Extraction of Chinese Multiword Expressions with a Statistical Tool
Scott S.L. Piao | Guangfan Sun | Paul Rayson | Qi Yuan
Proceedings of the Workshop on Multi-word-expressions in a multilingual context

2004

pdf bib
Evaluating Lexical Resources for a Semantic Tagger
Scott S. L. Piao | Paul Rayson | Dawn Archer | Tony McEnery
Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC’04)

Semantic lexical resources play an important part in both linguistic study and natural language engineering. In Lancaster, a large semantic lexical resource has been built over the past 14 years, which provides a knowledge base for the USAS semantic tagger. Capturing semantic lexicological theory and empirical lexical usage information extracted from corpora, the Lancaster semantic lexicon provides a valuable resource for the corpus research and NLP community. In this paper, we evaluate the lexical coverage of the semantic lexicon both in terms of genres and time periods. We conducted the evaluation on test corpora including the BNC sampler, the METER Corpus of law/court journalism reports and some corpora of Newsbooks, prose and fictional works published between the 17th and 19th centuries. In the evaluation, the semantic lexicon achieved a lexical coverage of 98.49% on the BNC sampler, 95.38% on the METER Corpus and 92.76%–97.29% on the historical data. Our evaluation reveals that the Lancaster semantic lexicon has a remarkably high coverage of the modern English lexicon, but needs expansion with domain-specific terms and historical words. Our evaluation also shows that, in order to make claims about the lexical coverage of annotation systems, as well as to render them ‘future proof’, we need to evaluate their potential both synchronically and diachronically across genres.

2003

pdf bib
Extracting Multiword Expressions with A Semantic Tagger
Scott S. L. Piao | Paul Rayson | Dawn Archer | Andrew Wilson | Tony McEnery
Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment

2000

pdf bib
Comparing Corpora using Frequency Profiling
Paul Rayson | Roger Garside
The Workshop on Comparing Corpora
