Joaquim Santos


2024

Named entity recognition specialised for Portuguese 18th-century History research
Joaquim Santos | Helena Freire Cameron | Fernanda Olival | Fátima Farrica | Renata Vieira
Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 1

2020

Embeddings for Named Entity Recognition in Geoscience Portuguese Literature
Bernardo Consoli | Joaquim Santos | Diogo Gomes | Fabio Cordeiro | Renata Vieira | Viviane Moreira
Proceedings of the Twelfth Language Resources and Evaluation Conference

This work focuses on Portuguese Named Entity Recognition (NER) in the Geology domain. The only domain-specific dataset in the Portuguese language annotated for NER is the GeoCorpus. Our approach relies on BiLSTM-CRF neural networks (a widely used type of network for this area of research) that use vector and tensor embedding representations. Three types of embedding models were used (Word Embeddings, Flair Embeddings, and Stacked Embeddings) under two versions (domain-specific and generalized). The domain-specific Flair Embeddings model was originally trained with a generalized context in mind, but was then fine-tuned on domain-specific Oil and Gas corpora, as there were not enough domain-specific corpora to train such a model from scratch. Each of these embeddings was evaluated separately, as well as stacked with another embedding. Finally, we achieved state-of-the-art results for this domain with one of our embeddings, and we performed an error analysis on the language model that achieved the best results. Furthermore, we investigated the effects of domain-specific versus generalized embeddings.
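The "Stacked Embeddings" mentioned in the abstract follow a simple idea: each token's vectors from two different embedding models are concatenated into one longer vector before being fed to the BiLSTM-CRF. A minimal sketch of that stacking step, with hand-made illustrative vectors (the token names and dimensions are not from the paper):

```python
# Minimal sketch of embedding stacking: per-token vectors from two
# different models are concatenated into a single, richer representation.
# The tiny vectors below are illustrative, not trained values.

def stack_embeddings(word_vecs, flair_vecs):
    """Concatenate a word-level and a contextual vector for each token."""
    return {tok: word_vecs[tok] + flair_vecs[tok] for tok in word_vecs}

word_vecs = {"basalto": [0.1, 0.2], "granito": [0.3, 0.1]}            # e.g. classic WE
flair_vecs = {"basalto": [0.5, 0.4, 0.9], "granito": [0.2, 0.8, 0.7]}  # e.g. contextual

stacked = stack_embeddings(word_vecs, flair_vecs)
print(stacked["basalto"])  # [0.1, 0.2, 0.5, 0.4, 0.9]
```

In practice, libraries such as Flair expose this as a stacked-embedding wrapper over pretrained models; the sketch above only shows the underlying concatenation.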

Word Embedding Evaluation in Downstream Tasks and Semantic Analogies
Joaquim Santos | Bernardo Consoli | Renata Vieira
Proceedings of the Twelfth Language Resources and Evaluation Conference

Language Models have long been a prolific area of study in the field of Natural Language Processing (NLP). Among the newer and most widely used language models are Word Embeddings (WE). WE are vector space representations of a vocabulary learned by an unsupervised neural network based on the contexts in which words appear. WE have been widely used in downstream tasks in many areas of study in NLP, which typically use these vector models as a feature in the processing of textual data. This paper presents the evaluation of newly released WE models for the Portuguese language, trained with a corpus composed of 4.9 billion tokens. The first evaluation was an intrinsic task in which WEs had to correctly build semantic and syntactic relations. The second evaluation was an extrinsic task in which the WE models were used in two downstream tasks: Named Entity Recognition and Semantic Similarity between Sentences. Our results show that a diverse and comprehensive corpus can often outperform a larger, less textually diverse corpus, and that batch training may cause quality loss in WE models.
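The intrinsic analogy task described above asks whether the vector arithmetic b - a + c lands closest to the expected fourth word (the classic "king - man + woman ≈ queen" pattern). A minimal, self-contained sketch with hand-made toy vectors (the Portuguese words and values are illustrative, not from the evaluated models):

```python
import math

# Toy illustration of the intrinsic analogy task: given vectors for
# a, b, c, find the vocabulary word whose vector is closest (by cosine
# similarity) to b - a + c. Vectors here are hand-made, not trained.

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot / (norm_u * norm_v)

def analogy(vocab, a, b, c):
    """Return the word (excluding a, b, c) maximising cos(w, b - a + c)."""
    target = [vb - va + vc for va, vb, vc in zip(vocab[a], vocab[b], vocab[c])]
    candidates = {w: v for w, v in vocab.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

vocab = {
    "rei":    [1.0, 1.0],    # king
    "rainha": [1.0, -1.0],   # queen
    "homem":  [0.5, 1.0],    # man
    "mulher": [0.5, -1.0],   # woman
    "pedra":  [0.0, 0.3],    # distractor
}
print(analogy(vocab, "homem", "rei", "mulher"))  # rainha
```

Benchmark suites run this over thousands of semantic and syntactic question quadruples and report accuracy; the sketch shows only the core vector-offset computation.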