2024
SPARQL can also talk in Portuguese: answering natural language questions with knowledge graphs
Elbe Miranda | Aline Paes | Daniel de Oliveira
Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 1
Exploring Portuguese Hate Speech Detection in Low-Resource Settings: Lightly Tuning Encoder Models or In-Context Learning of Large Models?
Gabriel Assis | Annie Amorim | Jonnathan Carvalho | Daniel de Oliveira | Daniela Vianna | Aline Paes
Proceedings of the 16th International Conference on Computational Processing of Portuguese - Vol. 1
Enhancing Sentence Simplification in Portuguese: Leveraging Paraphrases, Context, and Linguistic Features
Arthur Scalercio | Maria Finatto | Aline Paes
Findings of the Association for Computational Linguistics: ACL 2024
Automatic text simplification focuses on transforming texts into a more comprehensible version without sacrificing their precision. However, automatic methods usually require (paired) datasets that can be rather scarce in languages other than English. This paper presents a new approach to automatic sentence simplification that leverages paraphrases, context, and linguistic attributes to overcome the absence of paired texts in Portuguese. We frame the simplification problem as a textual style transfer task and learn a style representation using the sentences around the target sentence in the document and its linguistic attributes. Moreover, unlike most unsupervised approaches that require style-labeled training data, we fine-tune strong pre-trained models using sentence-level paraphrases instead of annotated data. Our experiments show that our model achieves remarkable results, surpassing the current state of the art (BART+ACCESS) while competitively matching a Large Language Model.
Analysis of Material Facts on Financial Assets: A Generative AI Approach
Gabriel Assis | Daniela Vianna | Gisele L. Pappa | Alexandre Plastino | Wagner Meira Jr | Altigran Soares da Silva | Aline Paes
Proceedings of the Joint Workshop of the 7th Financial Technology and Natural Language Processing, the 5th Knowledge Discovery from Unstructured Data in Financial Services, and the 4th Workshop on Economics and Natural Language Processing
Material facts (MF) are crucial and obligatory disclosures that can significantly influence asset values. Following their release, financial analysts embark on the meticulous and highly specialized task of crafting analyses to shed light on their impact on company assets, a challenge elevated by the daily volume of MFs released. Generative AI, with its demonstrated ability to craft coherent text, emerges as a promising solution to this task. However, while these analyses must incorporate the MF, they must also transcend it, enhancing it with vital background information, valuable and grounded recommendations, prospects, potential risks, and their underlying reasoning. In this paper, we approach this task as an instance of controllable text generation, aiming to ensure adherence to the MF and other pivotal attributes as control elements. We first explore language models' capacity to manage this task by embedding those elements into prompts and engaging popular chatbots. A bilingual proof of concept underscores both the potential and the challenges of applying generative AI techniques to this task.
BAMBAS at SemEval-2024 Task 4: How far can we get without looking at hierarchies?
Arthur Vasconcelos | Luiz Felipe De Melo | Eduardo Goncalves | Eduardo Bezerra | Aline Paes | Alexandre Plastino
Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
This paper describes the BAMBAS team's participation in SemEval-2024 Task 4 Subtask 1, which focused on the multilabel classification of persuasion techniques in the textual content of Internet memes. We explored a lightweight approach that does not consider the hierarchy of labels. First, we obtain text embeddings from Bernice, a multilingual language model pre-trained on tweets. Next, we use those embeddings to train a separate binary classifier for each label, adopting independent oversampling strategies in each model in a binary-relevance style. We tested our approach on the English dataset, exceeding the baseline by 21 percentage points while ranking 23rd in hierarchical F1 and 11th in hierarchical recall.