Andrea Favalli
2024
Investigating the Impact of Data Contamination of Large Language Models in Text-to-SQL translation
Federico Ranaldi | Elena Sofia Ruzzetti | Dario Onorati | Leonardo Ranaldi | Cristina Giannone | Andrea Favalli | Raniero Romagnoli | Fabio Massimo Zanzotto
Findings of the Association for Computational Linguistics: ACL 2024
Understanding textual descriptions to generate code seems to be an achieved capability of instruction-following Large Language Models (LLMs) in a zero-shot scenario. However, there is a real possibility that this translation ability is influenced by having seen the target textual descriptions and the related code during training. This effect is known as Data Contamination. In this study, we investigate the impact of Data Contamination on the performance of GPT-3.5 in the Text-to-SQL code-generation task. We introduce a novel method to detect Data Contamination in GPTs and examine GPT-3.5's Text-to-SQL performance using the well-known Spider dataset and our new, unfamiliar dataset Termite. Furthermore, we analyze GPT-3.5's efficacy on databases with modified information via an adversarial table disconnection (ATD) approach, which complicates Text-to-SQL tasks by removing structural pieces of information from the database. Our results indicate a significant performance drop for GPT-3.5 on the unfamiliar Termite dataset, even with ATD modifications, highlighting the effect of Data Contamination on LLMs in Text-to-SQL translation tasks.
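As a concrete illustration of the idea behind ATD (the abstract does not detail the authors' exact procedure), the minimal Python sketch below removes FOREIGN KEY constraints from a SQLite-style schema, erasing the explicit structural links between tables before the schema is shown to the model. The function name and regex-based approach are assumptions for illustration, not the paper's implementation.

import re

def adversarial_table_disconnection(ddl: str) -> str:
    """Hypothetical sketch: strip table-level FOREIGN KEY constraints
    from a CREATE TABLE script, removing explicit inter-table links."""
    # Matches clauses like: , FOREIGN KEY (singer_id) REFERENCES singer(id)
    pattern = r",?\s*FOREIGN KEY\s*\([^)]*\)\s*REFERENCES\s+\w+\s*\([^)]*\)"
    return re.sub(pattern, "", ddl, flags=re.IGNORECASE)

schema = """
CREATE TABLE singer (id INT PRIMARY KEY, name TEXT);
CREATE TABLE concert (
    id INT PRIMARY KEY,
    singer_id INT,
    FOREIGN KEY (singer_id) REFERENCES singer(id)
);
"""
print(adversarial_table_disconnection(schema))

After disconnection, the model must infer the join path between concert.singer_id and singer.id on its own, which is what makes the contamination test harder.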
2022
Every time I fire a conversational designer, the performance of the dialogue system goes down
Giancarlo Xompero | Michele Mastromattei | Samir Salman | Cristina Giannone | Andrea Favalli | Raniero Romagnoli | Fabio Massimo Zanzotto
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Incorporating handwritten domain scripts into neural-based task-oriented dialogue systems may be an effective way to reduce the need for large sets of annotated dialogues. In this paper, we investigate how the use of domain scripts written by conversational designers affects the performance of neural-based dialogue systems. To support this investigation, we propose the Conversational-Logic-Injection-in-Neural-Network system (CLINN), where domain scripts are coded in semi-logical rules. Using CLINN, we evaluated semi-logical rules produced by a team of differently skilled conversational designers. We experimented with the Restaurant domain of the MultiWOZ dataset. Results show that external knowledge is extremely important for reducing the need for annotated examples in conversational systems. In fact, CLINN with rules from conversational designers significantly outperforms a state-of-the-art neural-based dialogue system when trained on smaller sets of annotated dialogues.
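To give a sense of what hand-written dialogue logic of this kind might look like (the abstract does not reproduce the paper's actual semi-logical formalism), here is a toy Python sketch of IF-THEN rules for the MultiWOZ Restaurant domain. The rule format, state keys, and action strings are all hypothetical.

from typing import Optional

def rule_request_area(state: dict) -> Optional[str]:
    """IF the user wants a restaurant AND no area slot is filled,
    THEN ask for the area."""
    if state.get("intent") == "find_restaurant" and not state.get("area"):
        return "request(area)"
    return None

def rule_recommend(state: dict) -> Optional[str]:
    """IF the area and food slots are filled, THEN recommend a venue."""
    if state.get("area") and state.get("food"):
        return f"inform(name, area={state['area']}, food={state['food']})"
    return None

RULES = [rule_request_area, rule_recommend]

def next_action(state: dict) -> str:
    # Fire the first rule whose condition holds; otherwise fall back.
    for rule in RULES:
        action = rule(state)
        if action:
            return action
    return "fallback()"

print(next_action({"intent": "find_restaurant"}))          # request(area)
print(next_action({"area": "centre", "food": "italian"}))  # inform(...)

The appeal of this shape is that each rule encodes domain logic a designer can write and audit directly, rather than behavior that must be learned from annotated dialogues.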