Fabrizio Falchi
2024
You Write like a GPT
Andrea Esuli | Fabrizio Falchi | Marco Malvaldi | Giovanni Puccetti
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
We investigate how Raymond Queneau's Exercises in Style are evaluated by automatic methods for the detection of artificially-generated text. We work with Queneau's original French version, the Italian translation by Umberto Eco, and the English translation by Barbara Wright. We start by comparing how various methods for the detection of automatically generated text, also using different large language models, evaluate the different styles in the work. We then link this automatic evaluation to distinct characteristics related to the content and structure of the various styles. This work is an initial attempt at exploring how methods for detecting artificially-generated text can find application as tools to evaluate the qualities and characteristics of human writing, to support better writing in terms of originality, informativeness, and clarity.
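One common family of detectors scores a text by its perplexity under a pretrained language model, treating low-perplexity text as "GPT-like". The sketch below illustrates only this generic idea; it is not necessarily one of the methods compared in the paper, and the choice of GPT-2 as the scoring model is an assumption.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Assumption: GPT-2 as the scoring model; the paper compares several detectors.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the language model (lower = more 'expected')."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels == input_ids, the model returns the mean token-level
        # cross-entropy over the sequence; its exponential is the perplexity.
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Each of Queneau's styles could be scored this way; a style with unusually
# low perplexity would look "machine-like" to this kind of detector.
print(perplexity("For sale: baby shoes, never worn."))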
2021
AIMH at SemEval-2021 Task 6: Multimodal Classification Using an Ensemble of Transformer Models
Nicola Messina | Fabrizio Falchi | Claudio Gennaro | Giuseppe Amato
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
This paper describes the system used by the AIMH Team to approach SemEval-2021 Task 6. We propose an approach that relies on a transformer-based architecture to process multimodal content (text and images) in memes. Our architecture, called DVTT (Double Visual Textual Transformer), approaches Subtasks 1 and 3 of Task 6 as multi-label classification problems, where the text and/or images of the meme are processed, and the probabilities of the presence of each possible persuasion technique are returned as a result. DVTT uses two complete transformer networks that work on text and images and are mutually conditioned: one of the two modalities acts as the main one, and the second intervenes to enrich it, yielding two distinct modes of operation. The two transformers' outputs are merged by averaging the inferred probabilities for each possible label, and the overall network is trained end-to-end with a binary cross-entropy loss.
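The fusion and training scheme described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' DVTT code: the transformer branch internals are replaced by stand-ins, and names such as DummyBranch and NUM_LABELS are hypothetical.

import torch
import torch.nn as nn

NUM_LABELS = 20  # assumption: number of persuasion techniques in the task

class DummyBranch(nn.Module):
    """Stand-in for one full transformer branch: maps the main modality's
    features, enriched by the other modality's, to per-label logits."""
    def __init__(self, dim: int):
        super().__init__()
        self.fc = nn.Linear(2 * dim, NUM_LABELS)

    def forward(self, main_feats, other_feats):
        # Crude stand-in for cross-modal conditioning: concatenation.
        return self.fc(torch.cat([main_feats, other_feats], dim=-1))

class DualBranchClassifier(nn.Module):
    """Two mutually conditioned branches; per-label probabilities averaged."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.text_branch = DummyBranch(dim)   # text as the main modality
        self.image_branch = DummyBranch(dim)  # image as the main modality

    def forward(self, text_feats, image_feats):
        t = torch.sigmoid(self.text_branch(text_feats, image_feats))
        v = torch.sigmoid(self.image_branch(image_feats, text_feats))
        return (t + v) / 2  # merge by averaging the inferred probabilities

model = DualBranchClassifier()
criterion = nn.BCELoss()  # multi-label: one independent binary decision per label
text_feats, image_feats = torch.randn(4, 256), torch.randn(4, 256)
labels = torch.randint(0, 2, (4, NUM_LABELS)).float()  # multi-hot targets
loss = criterion(model(text_feats, image_feats), labels)
loss.backward()  # the whole network is trained end-to-end

Averaging probabilities rather than logits matches the abstract's description of the merge; applying BCE to the averaged output is one plausible reading of "trained end-to-end with a binary cross-entropy loss".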