Pedro Silva
2026
Lost in Quantization: Activation Outliers Explain Language-Specific FP8 Sensitivity in Llama-3
Guilherme Silva | Pedro Silva | Matheus Peixoto | Gladston Moreira | Eduardo Luz
Proceedings of the 17th International Conference on Computational Processing of Portuguese (PROPOR 2026) - Vol. 1
Quantization is key for efficient LLM inference, but its language-specific effects are understudied. We compare INT8 and FP8 (E4M3) quantization for Meta-Llama-3-8B on English and Brazilian Portuguese (PT-BR). INT8 with outlier handling preserves perplexity in both languages, while naive FP8 casting degrades English far more than PT-BR (+18% vs. +3.9%). Activation analysis shows rarer, larger English spikes (>35) that are more prone to saturation under unscaled E4M3, whereas PT-BR activations are more concentrated. Our FP8 results reflect a naive casting stress test (no calibration/scaling), not an optimized FP8 recipe.
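The saturation mechanism this abstract points to can be illustrated with a minimal pure-Python sketch of an unscaled ("naive") round-to-nearest E4M3 cast. This is an assumption-laden toy, not the paper's implementation: it assumes the common "fn" variant of E4M3 (4 exponent bits, 3 mantissa bits, bias 7, max finite value 448, no infinities) and ignores NaN encoding.

```python
import math

E4M3_MAX = 448.0  # largest finite value in FP8 E4M3 (fn variant) -- assumption

def quantize_e4m3(x: float) -> float:
    """Round a float to the nearest FP8 E4M3 value via a naive, unscaled cast."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = abs(x)
    if mag > E4M3_MAX:
        # Without per-tensor scaling, large activation outliers clip here.
        return sign * E4M3_MAX
    exp = math.floor(math.log2(mag))
    exp = max(exp, -6)            # subnormal range uses a fixed exponent of -6
    step = 2.0 ** (exp - 3)       # 3 mantissa bits -> 8 steps per binade
    return sign * round(mag / step) * step

# An outlier spike clips to 448; a value near 35 lands on the nearest
# representable point (spacing is 4 in that binade), losing ~3% precision.
print(quantize_e4m3(500.0))   # 448.0
print(quantize_e4m3(35.0))    # 36.0
```

With per-tensor scaling (as in calibrated FP8 recipes), the input would be rescaled into the representable range before the cast, which is why the abstract frames these results as a stress test rather than an optimized FP8 pipeline.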
A Multitask Transformer for Offensive Language Detection and Target Identification in HateBR
Guilherme Silva | Pedro Silva | Matheus Peixoto | Gladston Moreira | Eduardo Luz
Proceedings of the 17th International Conference on Computational Processing of Portuguese (PROPOR 2026) - Vol. 1
Hate speech detection is often treated as a binary task, ignoring the hierarchical nature of toxicity, such as severity levels and specific target groups. This work presents a Multitask Learning (MTL) approach for the HateBR dataset, utilizing a shared BERTimbau encoder to simultaneously predict binary offensiveness, ordinal severity, and hate speech targets. Our experiments demonstrate that the MTL architecture outperforms Single-Task baselines on the primary offensive detection task, increasing the Matthews Correlation Coefficient from 0.80 to 0.82. Beyond predictive performance, we show that joint training implicitly enforces hierarchical sanity: the unified model yields a 0% target-inconsistency rate (i.e., no cases where a comment is predicted Non-offensive while still assigned a hate target). However, we observe negative transfer in the fine-grained multilabel target task (Micro-F1 drops from 0.59 to 0.42), highlighting a trade-off between logical consistency and target attribution under extreme imbalance.
Global vs. Local Sentence Embeddings for Brazilian Portuguese: Revisiting Monolingual Models in the Age of Foundation Models
Matheus Peixoto | Guilherme Silva | Giacomo Figueredo | Pedro Silva | Eduardo J. S. Luz
Proceedings of the 17th International Conference on Computational Processing of Portuguese (PROPOR 2026) - Vol. 1
The choice between large-scale, multilingual, foundation models and specialized monolingual models for languages like Brazilian Portuguese (PT-BR) presents a complex trade-off between generalization and specialization. This paper investigates this trade-off through an empirical study across a diverse suite of tasks. We evaluate multiple families of language models under both linear probing and fine-tuning regimes. We find that monolingual encoders exhibit greater "adaptation plasticity" during fine-tuning, improving on both classification and semantic similarity, where global (multilingual) models degrade. However, this plasticity comes at a cost: our tokenization analysis suggests that monolingual models struggle with foreign terms, whereas modern multilingual tokenizers show surprising morphological competence, challenging a long-standing assumption in the field. We conclude that the optimal model choice is a task-dependent trade-off between vocabulary coverage and adaptation flexibility.
2024
Toxic Text Classification in Portuguese: Is LLaMA 3.1 8B All You Need?
Amanda Oliveira | Pedro Silva | Vander Freitas | Valéria Santos | Gladston Moreira | Eduardo Luz
Proceedings of the 15th Brazilian Symposium in Information and Human Language Technology
Evaluating Federated Learning with Homomorphic Encryption for Medical Named Entity Recognition Using Compact BERT Models
Marcos Felipe Rezende | Rodrigo Silva | Eduardo Luz | Pedro Silva
Proceedings of the 15th Brazilian Symposium in Information and Human Language Technology
2023
How Good Is ChatGPT For Detecting Hate Speech In Portuguese?
Amanda Oliveira | Thiago Cecote | Pedro Silva | Jadson Gertrudes | Vander Freitas | Eduardo Luz
Proceedings of the 14th Brazilian Symposium in Information and Human Language Technology
2010
Building High Quality Databases for Minority Languages such as Galician
Francisco Campillo | Daniela Braga | Ana Belén Mourín | Carmen García-Mateo | Pedro Silva | Miguel Sales Dias | Francisco Méndez
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
This paper describes the results of a joint R&D project between Microsoft Portugal and the Signal Theory Group of the University of Vigo (Spain), in which a set of language resources was developed for application to Text-to-Speech synthesis. First, a large corpus of 10,000 Galician sentences was designed and recorded by a professional female speaker. Second, a lexicon of over 90,000 entries with phonetic and grammatical information was collected and reviewed manually by an expert linguist. Finally, these resources were used in a MOS (Mean Opinion Score) perceptual test comparing two state-of-the-art speech synthesizers from the two groups: Microsoft's HMM-based system and the University of Vigo's unit-selection system.